An Easy Propaedeutics Into the New Physical and Mathematical Science of the Universal Law – ebook

For Non-Idiotic Scientists, Intelligent Light Workers and All Humans With a Sincere Quest for True Knowledge and Rapid Spiritual Evolution

Georgi Alexandrov Stankov, June 1, 2017

www.stankovuniversallaw.com


 

Dear Sir,

Modern physics is, to use a popular modern term, essentially “fake science”, and so is mathematics since 1931, when the famous Austrian mathematician Kurt Gödel showed beyond any doubt with his famous incompleteness theorem (in Über formal unentscheidbare Sätze der “Principia Mathematica” und verwandter Systeme) that mathematics, as a hermeneutic discipline of abstract human thinking, cannot prove its own validity with its own means. Since then mathematics, and together with it all exact natural sciences that use mathematics as a tool to describe nature in terms of natural laws and mathematical models, have existed in the famous Foundation Crisis of Mathematics and Science (Grundlagenkrise der Mathematik).

I hope that you as a theoretician are well aware of this fact and take it into account in your research. I say this because most scientists have swept this unpleasant truth with a huge broom under the carpet of total forgetfulness and live as innocent sinners in their self-inflicted illusion, called “physics” and “human science”.

Present-day physics is in big trouble, as the standard model cannot explain most of the observed phenomena. It is unable to integrate gravitation with the other three fundamental forces, and there is no theory of gravitation at all. This deficiency is well known.

I made a survey of the main research focus of ca. 1000 representative physicists worldwide, as they present themselves on their personal websites, and found out that more than 60% of all physicists have dedicated their theoretical activities to improving or replacing the standard model, which is still considered, out of inertia and for lack of viable alternative solutions, to be the pinnacle of modern physics, incorporating classical quantum mechanics, QED and QCD with the theory of relativity, but not classical mechanics.

This is the most convincing proof that the standard model is “fake science” and that it must be replaced, as it does not explain anything. It is very encouraging that the majority of physicists and scientists (theoreticians and mathematicians) understand and accept this stark and shocking fact.

When the Nobel Prize Committee awarded in 2015

Takaaki Kajita
Super-Kamiokande Collaboration
University of Tokyo, Kashiwa, Japan

and

Arthur B. McDonald
Sudbury Neutrino Observatory Collaboration
Queen’s University, Kingston, Canada

for their experimental work showing that neutrinos have mass, it had to admit in the press release that:

“The discovery led to the far-reaching conclusion that neutrinos, which for a long time were considered massless (?), must have some mass, however small.

For particle physics this was a historic discovery. Its Standard Model of the innermost workings of matter had been incredibly successful, having resisted all experimental challenges for more than twenty years. However, as it requires neutrinos to be massless, the new observations had clearly showed that the Standard Model cannot be the complete theory of the fundamental constituents of the universe.” (for more information read here)

Let me summarize some of the greatest blunders that have been made in physics so far and that expose it as “fake science” – blunders that arose only because physicists have not realized that their discipline is simply mathematics applied to the physical world and have not employed it according to established axiomatic, formalistic standards. Therefore, before one can reform physics, one should apply rigidly and methodically the principle of mathematical formalism, which was first introduced by Hilbert and which led, through the famous Grundlagenstreit (foundation dispute) between the two world wars in Europe, to Gödel’s irrefutable proof of the invalidity of mathematics and to the acknowledgement of the Foundation Crisis of mathematics that had been simmering since the beginning of the 20th century, after B. Russell presented his famous paradoxes (antinomies):

1) Neither photons nor neutrinos are massless particles. Physicists have failed to understand epistemologically their own definition of mass, which is based on mathematics and is in fact an “energy relationship”. All particles and systems of nature have energy and thus mass (for further information read here).

2) This eliminates the ridiculous concept of “dark matter”, which (together with dark energy) is said to account for about 95% of the total mass of the universe according to the current standard model of cosmology – another epitome of “fake science”, as the recent dispute on the inflation hypothesis not being a real science has truly revealed (read here). The 95% of missing matter is the mass of the photon space-time, which is now considered to be “massless”. I have shown how one can calculate the mass of photons very easily and from there calculate the mass of matter, beginning with the chemical elements (read here and see Table 1).

In this way one can easily integrate gravitation with the other three fundamental forces and explain for the first time the mechanism of gravitation by unifying classical mechanics with electromagnetism and quantum mechanics, while eliminating the esoteric search for the hypothetical graviton, which is another epic blunder of physics (read here).

3) Charge does not exist. When the current definition of charge is written in the correct mathematical manner, which physicists have failed to do for almost four centuries (actually since Antiquity, when electricity first became known), it can be easily shown that charge is a synonym, a pleonasm, for “geometric area”, and that the SI unit of 1 coulomb is equivalent to 1 square metre. An unforgivable flaw!

Read here: The Greatest Blunder of Science: “Electric Charge” is a Synonym for “Geometric Area”

And I can go on and on and list at least 20 further epic blunders of modern physics that make it a “fake science”. At the same time present-day physics can be very easily revised and turned from fake science into true science when one first resolves the foundation crisis of mathematics as I have already done in 1995 with the development of

The New Integrated Physical and Mathematical Axiomatics of the Universal Law

With this theoretical foundation I was able to prove that all current distinct physical laws, which make up the confusing content of physics textbooks nowadays, are derivations and partial applications of one Universal Law of Nature, as postulated by Einstein (world field equation, Weltformel), H. Weyl (unified field theory), and many other prominent physicists between the two world wars.

Read here: The Universal Law of Nature

Herewith I strongly recommend that you revise your knowledge of physics, which is as false as this science is fake, and start with the new introduction into the Theory of Science of the Universal Law, which I have just published as an ebook:

An Easy Propaedeutics Into the New Physical and Mathematical Science of the Universal Law – ebook

After you have grasped the basic tenets of the new theory, you can proceed with my scientific books and articles on the new physical and mathematical theory of the Universal Law that reduces physics to applied mathematics:

Let me assure you, with my best intentions, that you have only two options:

1) Outright rejection of my proposal, based on prejudices and unwarranted self-esteem, which are but a manifestation of personal fears that lead to ignorance, or

2) Showing discernment, an open mind and intellectual curiosity, and making a leap in your understanding of Nature.

I have dealt with the first response on the part of conventional scientists for more than 20 years, since I published my first book on the Universal Law in 1997, and I am not impressed at all by this kind of stubborn attitude, which only afflicts the person who expresses it.

Besides, I know beyond any doubt that this year of 2017 is the year of the introduction of the new theory of the Universal Law on a global scale, and thus I am doing you a great favour by informing you in advance.

With the breakthrough of the new theory of the Universal Law nothing will remain the same in science, and your allegedly secure position in your scientific institution will be just as ephemeral as the secure election of Hillary Clinton with “more than 95% certainty”, as was claimed by the fake MSM. Believe me, there is no difference between the fake MSM, which with their obvious lies are currently in free fall, and present-day fake physics and science, which will also cease to exist in their present form in the blink of an eye in the course of this year – exactly as the fake MSM narrative collapsed within a few weeks before and after the election of Trump, notwithstanding the fact that it had controlled the opinion of the masses for decades, if not centuries. The parallels are striking, and that should convince you that your current scientific position is untenable.

It is your choice to accept this unconditional offer of infinite cognitive value or to reject it and stay blind for the rest of your life, and I hope you make the right choice. If you do, I am on your side to help you make this giant leap in human awareness and leave the current condition of cognitive blindness.

Finally, I would like to make you aware of my proposal (official announcement) to the international scientific community from July 2014, which is still valid.

With best regards

Dr. Georgi Stankov

Addendum:

The same letter, somewhat modified to account for the specific modern history of Russia, was sent in Russian to ca. 1000 Russian physicists and academicians; it follows below in English translation.

“In questions of science, the authority of a thousand is not worth the simplest reasoning of a single individual” (Galileo Galilei).

Dear colleague,

Modern physics, to use a popular modern term, is essentially “fake science”, as is mathematics after 1931, when the famous Austrian mathematician Kurt Gödel showed beyond any doubt in his famous incompleteness theorem that mathematics, as a hermeneutic discipline of abstract human thinking, cannot prove its own validity with its own means (Über formal unentscheidbare Sätze der “Principia Mathematica” und verwandter Systeme). Since then mathematics, and with it all the exact natural sciences that use mathematics as a tool for describing nature in terms of natural laws and mathematical models, have existed in the famous “Foundation Crisis of Mathematics (Grundlagenkrise der Mathematik)”.

I hope that you, as a theoretician, are well aware of this fact and take it into account in your research. I tell you this because most scientists have swept this bitter truth with a large broom under the “carpet of total oblivion” and live like innocent sinners in their self-satisfied illusion called “physics” and “human science”.

Physics today is in big trouble because of the inability of the Standard Model to explain most of the observed phenomena. It is unable to integrate gravitation with the other three fundamental forces and, in addition, there is no theory of gravitation at all. This deficiency is well known.

I made a survey of the main lines of research of ca. 1000 representative physicists worldwide, as presented on their personal websites, and found that more than 60% of them have dedicated their theoretical work to attempts to improve or replace the Standard Model, which, out of inertia and for lack of viable alternative solutions, is still considered the pinnacle of modern physics, incorporating classical quantum mechanics, QED and QCD with the theory of relativity, but not classical mechanics.

This is the most convincing proof that the Standard Model is “fake science” and must be replaced, as it explains nothing. It is very encouraging that the majority of physicists and scientists (theoreticians and mathematicians) understand and accept this stark and shocking fact.

When the Nobel Committee in 2015 awarded the prize to

Takaaki Kajita

Super-Kamiokande Collaboration

University of Tokyo, Kashiwa, Japan

and

Arthur B. McDonald

Sudbury Neutrino Observatory Collaboration

Queen’s University, Kingston, Canada

for their experimental work showing that neutrinos may have mass, it had to admit that:

“The discovery led to the far-reaching conclusion that neutrinos, which for a long time were considered massless (?), must have some mass, however small.”

For particle physics this was a historic discovery. Its Standard Model of the innermost workings of matter had until then been incredibly successful, having resisted all experimental challenges for more than twenty years. However, since the Standard Model relies on neutrinos being massless, it cannot be reconciled with the new observations, which clearly show that it cannot be the complete theory of the fundamental constituents of the universe” (for more information read here).

Let me summarize some of the greatest blunders made in physics so far and expose it as “fake science”, because physicists have not yet realized that their discipline is simply mathematics applied to the physical world and have not employed it according to established axiomatic and formalistic standards. Before physics can be reformed, one must rigorously and methodically apply the principle of mathematical formalism, which was first introduced by Hilbert and led, through the famous “Grundlagenstreit” (foundation dispute) between the two world wars in Europe, to Gödel’s irrefutable proof of the invalidity of mathematics and to the acknowledgement of the foundation crisis of mathematics, which had been simmering since the beginning of the 20th century, after B. Russell presented his famous paradoxes (antinomies):

1) Neither photons nor neutrinos are massless particles. Physicists have failed to understand epistemologically their own definition of mass, which is based on mathematics and is in fact an “energy relationship”. All particles and systems of nature carry energy and therefore mass (for further information see here).

2) This observation eliminates the ridiculous concept of “dark matter”, which accounts for 95% of the total mass of the universe according to the current Standard Model of cosmology – another embodiment of “fake science”, as the recent dispute over the inflation hypothesis not being real science has revealed (read here). The 95% of missing matter is the mass of the photon space-time, which is currently considered “massless”. I have shown how one can very easily calculate the mass of photons and from there calculate the mass of matter, beginning with the chemical elements (read here and see Table 1).

In this way one can easily integrate gravitation with the other three fundamental forces and for the first time explain the mechanism of gravitation by unifying classical mechanics with electromagnetism and quantum mechanics, while eliminating the esoteric search for the hypothetical graviton, which is yet another grandiose blunder of physics (read here).

3) “Charge” does not exist. When the present definition of charge is written in the correct mathematical manner, which physicists have failed to do for four centuries (actually, since Antiquity), ever since electricity became known, it is easy to show that charge is a synonym, a pleonasm, for “geometric area”, and that the SI unit of 1 coulomb is equivalent to 1 square metre. An unforgivable flaw!

Read here: The Greatest Blunder of Science: “Electric Charge” is a Synonym for “Geometric Area”

And I can go on and on and list at least 20 further grandiose blunders of modern physics that make it a “fake science”. At the same time, modern physics can very easily be revised and turned from a false science into a true one if one first resolves the foundation crisis of mathematics, as I already did in 1995:

The New Integrated Physical and Mathematical Axiomatics of the Universal Law

With this theoretical foundation I was able to prove that all the current distinct physical laws, which nowadays make up the confusing content of physics textbooks, are derivations and partial applications of a single Universal Law of Nature, as postulated by Einstein (world field equation, “Weltformel”), H. Weyl (unified field theory) and many other accomplished physicists between the two world wars.

Read here: The Universal Law of Nature

Herewith I strongly recommend that you revise your knowledge of physics, which is as false as this science is fake, and start with the new introduction into the Theory of Science of the Universal Law, which I have just published in electronic form:

An Easy Propaedeutics Into the New Physical and Mathematical Science of the Universal Law – ebook

I have also written, in Russian, a special popular introduction to the new theory of the Universal Law and its consequences for science, technology and society, which will help you better understand the scope of this revolutionary discovery:

Universalnii (Vseobshchii) zakon. Kratkoe vvedenie v obshchniu teoriiu nauki i vliijanie eio na obshtestvo

After you have grasped the basic tenets of the new theory, you can proceed to the study of my scientific books and articles on the new physical and mathematical theory of the Universal Law, which “reduces” physics to applied mathematics:

The New Integrated Physical and Mathematical Axiomatics of the Universal Law

Volume II: The Universal Law. The General Theory of Physics and Cosmology (Full Version)

Volume II: The Universal Law. The General Theory of Physics and Cosmology (Concise Version)

Volume III: The General Theory of Biological Regulation. The Universal Law in Bio-Science and Medicine

Let me assure you, with my best intentions, that you have only two options:

1) Outright rejection of my proposal, based on prejudices and unwarranted self-esteem, which are but a manifestation of personal fears that lead to ignorance; or

2) Showing discernment, openness and intellectual curiosity, and making a leap in your understanding of Nature.

I have had to deal with the first response from conventional scientists for more than 20 years now, ever since I published my first book on the Universal Law in 1997, and I am far from impressed by this kind of stubborn attitude, from which only the person who expresses it suffers.

Besides, I am certain beyond any doubt that this year of 2017 will be the year of the introduction of the new theory of the Universal Law on a global scale, and I am therefore doing you a great favour by informing you in advance.

With the breakthrough of the new theory of the Universal Law, nothing in science will remain the same, and your allegedly secure position in your scientific institution will become as ephemeral as the guaranteed election of Hillary Clinton with a “probability of more than 95%”, as the fake mainstream media in the West claimed (which is why I mostly read RT, Sputnik and other Russian outlets). And then, believe me, the difference will vanish between the fake Western media, which with their obvious lies are currently in free fall, and present-day fake physics and science, which will likewise cease to exist in their present form in the blink of an eye this year. Exactly as the fake narrative of the Western media collapsed within a few weeks before and after the election of Trump, notwithstanding the fact that they had controlled the opinion of the masses for decades, if not centuries, and with it upheld the fake legitimacy of all the established political structures in the West, such as NATO and the EU. It will be very similar to the collapse of the Soviet Union, which I experienced first-hand during my visits to Moscow in the early 1990s, where I happened to conduct clinical research with the Russian Academy of Sciences. These parallels are striking, and they should convince you that your current scientific approach is in a vulnerable position and that you should not repeat the mistakes of the nomenklatura in the 1990s. After all, it was Gorbachev who said that “history punishes those who come too late”, and with this phrase he heralded the fall of the Berlin Wall and the Iron Curtain … and a new era of a free and sovereign Russia on the ruins of the Soviet Union, which until then had likewise followed a false doctrine.

What matters most now is that Russian scientists embrace the true tradition of continental Europe – the Euclidean axiomatisation of science – and do not yield to the now erroneous Anglo-Saxon dominance of a non-holistic scientific thinking that splits everything into disconnected parts. Russian scientists must fully embrace logical axiomatic thinking in mathematics, to which they contributed decisively in the past and continue to contribute today.

It is your choice to accept my unconditional offer, which carries immeasurable value for your awareness, or to reject it and remain blind for the rest of your life – and I hope you will choose correctly. In that case I am on your side to help you make this giant leap in human awareness and leave the current state of “cognitive blindness”.

Finally, I would like to inform you of my proposal (official announcement) to the international scientific community from July 2014, which is still valid.

With best regards,

Dr. Georgi Stankov

Vancouver, Canada

Later on I also sent this open letter to the Russian scientists to the Russian President Vladimir Putin in the Kremlin to inform him about this new development. I met with him in the dream state a week later, where he disapproved, at the soul level, of the lack of response from the Russian scientists, who had missed this great opportunity for Russia.

To the President of Russia, Vladimir Putin,

Dear Sir,

In June 2017 I sent an open letter to more than 1000 renowned Russian physicists, announcing the greatest scientific discovery in the history of mankind – the discovery of the Universal Law and the development of the new physical and mathematical theory of science. I did this as a friend of the Russian people, who are our Slavic brothers and to whom we owe the existence of our Bulgarian state and nation. I have attached that letter as a document to this one.

The implementation of this discovery will bring about the greatest revolution of humanity, and the nations that fully understand it will become the victors of human history. You recently emphasized the importance of artificial intelligence (AI) in one of your speeches and stated that the winners will be those countries that develop AI. True AI can be developed only by scientists who understand the new physical and mathematical theory of the Universal Law; there should be no doubt about this.

Unfortunately, the Russian physicists did not respond to this generous offer on my part. There are many reasons for this, but none of them is of a scientific, theoretical nature; they are all rooted in their fears of confronting the unvarnished scientific truth. I would therefore like to propose that you call upon scientists and experts to examine the new theory of the Universal Law and deliver their verdict. When they confirm its validity – and there will be no other outcome – I am ready to come to Russia and work with the Russian Academy of Sciences on its full implementation. This will lead to the greatest renaissance of Russian and Slavic culture, and you will gain an immortal place in human history as its political patron. Obviously, a far-sighted and indomitable man like you is needed to convince the scientific community to accept the scientific truth, since direct communication with the Russian scientists has been hindered by their deeply rooted fears of losing their professional positions, thereby doing the great Russian nation a disservice.

With best regards,

Dr. Georgi Stankov

Vancouver, Canada
Munich, Germany
Plovdiv, Bulgaria

 

Foreword

While the numbers of the first two groups of people addressed in the title are asymptotically approaching the zero value in the current End Time, the number of the third, much larger group of humans will rapidly rise in the coming days during the profound change and transformation of this planet and humanity. This is the target group of the current propaedeutics into the new revolutionary Theory of Science of the Universal Law that will be the vehicle for this transformation of mankind. This group of people will become the wayshowers of the new humanity and custodians of the new ascended Gaia in all eternity.

There is no doubt that the new scientific theory of the Universal Law, as presented in its totality on this website, encompasses the entire bandwidth of all fundamental scientific, social, economic, psychological, political, gnostic and philosophical aspects and topics with which humanity has dealt throughout its long and not so glorious history in order to survive. For this reason the new science of the Universal Law will very soon become the dominant Weltanschauung (world view) of the new, evolved humanity and thus the source of a cornucopia of new revolutionary, higher-dimensional technologies that will bring infinite prosperity to all humans.

This is the divine plan of the Source for this earth and its human population, and it is already a reality in all simultaneously existing upper 4D and 5D earths which we, the PAT (the Planetary Ascension Team of Gaia and humanity), and I, who have had the privilege to be its captain, have been creating for a very long time.

The new theory of the Universal Law is a gift of Godhead to humanity on the verge of its glorious ascension when the end of the old dreadful era of Orion oppression meets the new beginning of the new era of enlightenment, peace, freedom and prosperity.

As all evil in this reality stems from the spiritual ignorance of the incarnated human personalities, it can be very easily eradicated when the new axiomatic scientific theory of the Universal Law based on the unity of All-That-Is is fully implemented and understood by all the people. This will streamline the collective consciousness in a yet unknown, revolutionary manner while stimulating at the same time the individual creationary potential of each and every human being. This will lead to infinite prosperity, bliss, happiness and human progress for the new mankind that will rapidly evolve to a multidimensional, transliminal, transgalactic civilisation.

The pathway to this magnificent end can only go through a full comprehension and implementation of the new theory of the Universal Law.

This publication as an ebook will remain on the front page for a further 30 days. During this time I would humbly ask all my readers and members of the PAT to send every day at least one email with a link to this publication – and the more, the better – to any person, scientist, institution or website on the Internet that is deemed to profit from this new knowledge and shows a modicum of genuine desire to expand his/her/its awareness. In this way we shall trigger an energetic avalanche that will usher in the new era of enlightenment for mankind.

I thank, from the bottom of my heart, all my readers and the PAT for your indomitable and faithful support throughout all the years and for your participation in this, hopefully, final effort of cosmic proportions which we must perform in order to trigger the ultimate ascension leap and transmutation of Gaia and humanity in the current End Time. After all we are the ones who create and fuel the planetary ascension process guided by the Universal Law of All-That-Is.

Content

(All chapters in this book can be also found as separate publications on this website by clicking on the link.)

Introduction: The Universal Law of Nature

I. Space-Time = Energy Has only Two Dimensions (Constituents) – Space and Time

II. Wrong Space-Time Concepts of Conventional Physics and Their Revision in the Light of the New Axiomatics of the Universal Law

III. Why Modern Cosmology Is a Fake Science

Further Basic Literature on the Universal Law:

 

Introduction: The Universal Law of Nature

Scientific definition

Conventional science has not yet discovered a single law of Nature with which all natural phenomena can be assessed without exception. Such a law should be defined as “universal”. Based on sound, self-evident scientific principles and facts, the current article analyses, from the viewpoint of the methodology of science, the formal theoretical criteria which a natural law should fulfill in order to acquire the status of a “Universal Law”.

Current concepts

In science, some known natural laws, such as Newton’s law of gravitation, are referred to as “universal”, e.g. “universal law of gravitation”. This term implies that this particular law is valid for the whole universe independently of space and time, although these physical dimensions are subject to relativistic changes as assessed in the theory of relativity (e.g. by the Lorentz transformations).

The same holds true for all known physical laws in modern physics, including Newton’s three laws of classical mechanics, Kepler’s laws of planetary motion, various laws on the behaviour of gases, fluids, and levers, the first law of thermodynamics on the conservation of energy, the second law of thermodynamics on growing entropy, diverse laws of radiation, numerous laws of electrostatics, electrodynamics, electricity, and magnetism (summarised in Maxwell’s four equations of electromagnetism), laws of wave theory, Einstein’s famous law on the equivalence of mass and energy, Schrödinger’s wave equation of quantum mechanics, and so on. Modern textbooks of physics contain more than a hundred distinct laws, all of them considered to be of universal character.

According to current physical theory, Nature – in fact, only inorganic, physical matter – seems to obey numerous laws which are of universal character, i.e. they hold true at any place and time in the universe and operate simultaneously and in perfect harmony with each other, so that the human mind perceives Nature as an ordered Whole.

Empirical science, conducted as experimental research, seems to confirm the universal validity of these physical laws without exception. For this purpose, all physical laws are presented as mathematical equations. Laws of Nature expressed without the means of mathematics are unthinkable in the context of present-day science. Any true natural law must be empirically verified by precise measurements before it acquires the status of a universal physical law. All measurements in science are based on mathematics, e.g. on the various units of the SI system, which are first defined as numerical relationships within mathematics and only then derived as mathematical results from experimental measurements. Without the possibility of presenting a natural law as a mathematical equation, there is no possibility of objectively proving its universal validity under experimental conditions.

State-of-the-art in science

From the above elaboration we can conclude that the term “Universal Law” should be applied only to laws that can be presented by means of mathematics and verified without exception in experimental research. This is the only valid “proof of existence” (Existenzbeweis, Dedekind) of a “universal law” in science from a cognitive and epistemological point of view.

Until now, only the known physical laws fulfill the criterion of being universally valid within the physical universe and, at the same time, of being independent of the fallacies of human thinking at the individual and collective level. For instance, the universal gravitational constant G in Newton’s law of gravitation is valid at any place in the physical universe. The gravitational acceleration of the earth g, also a basic constant of Newton’s laws of gravitation, applies only to our planet – therefore, this constant is not universal. Physical laws which contain such constants are local laws and not universal.
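
A minimal numerical sketch may help to illustrate this distinction (the values below are standard textbook values and are not given in the text itself): the universal constant G applies everywhere, while g is merely the local value it yields at the Earth’s surface.

```python
# A minimal sketch, assuming standard textbook values (not given in the text itself).
G = 6.674e-11          # universal gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24     # mass of the Earth, kg (a local parameter)
R_earth = 6.371e6      # mean radius of the Earth, m (a local parameter)

# The "local constant" g follows from the universal G and the local parameters.
g = G * M_earth / R_earth**2
print(f"g = {g:.2f} m/s^2")    # ~9.82 m/s^2, valid only for this planet
```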

It is important to observe that science has discovered universal laws only for the physical world, defined as inanimate matter, and has failed to establish such laws for the regulation of organic matter. Bio-science and medicine are still not in a position to formulate similar universal laws for the functioning of biological organisms in general and of the human organism in particular. This is a well-known fact that discredits these disciplines as exact scientific studies.

The various bio-sciences, such as biology, biochemistry, genetics and medicine – with the notable exception of physiology, where the action potentials of cells, such as neurons and muscle cells, are described by the laws of electromagnetism – are entirely descriptive, non-mathematical disciplines. This is basic methodology of science and should be evident to any specialist.

This conclusion holds true independently of the fact that scientists have introduced numerous mathematical models in various fields of bio-science, with which they experiment in an excessive way. Until now they have failed to show that such models are universally valid.

The general impression among scientists today is that organic matter is not subject to universal laws similar to those observed for physical matter. This, according to their conviction, accounts for the difference between organic and inorganic matter.

The inability of scientists to establish universal laws in biological matter may be due to the fact that:

a) such laws do not exist or

b) they exist, but are so complicated that they are beyond the cognitive capacity of mortal human minds.

The latter hypothesis has given birth to the religious notion of the existence of divine universal laws, by which God or a higher consciousness has created Nature and Life on earth and regulates them in an incessant, invisible manner.

These considerations do not take into account the fact that there is no principal difference between inorganic and organic matter. Biological organisms are, to a large extent, composed of inorganic substances. Organic molecules, such as proteins, fatty acids, and carbohydrates, contain only inorganic elements, to which the above-mentioned physical laws apply. Therefore, these laws should also apply to organic matter, otherwise they would not be universal. This simple and self-evident fact has been grossly neglected in modern scientific theory.

The discrimination between inorganic and organic matter – between physics and bio-science – is therefore artificial and exclusively based on didactic considerations. This artificial separation of scientific disciplines has emerged historically with the progress of scientific knowledge in the various fields of experimental research in the last four centuries since Descartes and Galilei founded modern science (mathematics and physics). This dichotomy has its roots in modern empiricism and contradicts the theoretical insight and the overwhelming experimental evidence that Nature – be it organic or inorganic – operates as an interrelated, harmonious entity.

Formal scientific criteria for a “Universal Law”

From this disquisition, we can easily define the fundamental theoretical criteria which a natural law must fulfill in order to be called “Universal Law”. These are:

1. The Law must hold true for inorganic and organic matter.

2. The Law must be presented in a mathematical way, i.e. as a mathematical equation, because all known physical laws are mathematical equations.

3. The Law must be empirically verified without exception by all natural phenomena.

4. The Law must integrate all known physical laws, that is to say, they must be derived mathematically from this Universal Law and must be ontologically explained by it. In this case, all known physical laws are mathematical applications of one single Law of Nature.

5. Alternatively, one has to prove that all known fundamental natural constants in physics, which pertain to numerous distinct physical laws, are interrelated and can be derived from each other. This will be powerful mathematical and physical evidence for the unity of Nature under one Universal Law, as all these constants can be experimentally measured by means of mathematical equations.
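
A well-known conventional illustration of criterion 5 (taken from standard electromagnetism, not from the text above) is that the vacuum permeability, the vacuum permittivity and the speed of light are not independent constants; a minimal sketch:

```python
import math

# A minimal sketch with the conventional SI values: c = 1/sqrt(mu_0 * epsilon_0),
# i.e. three "distinct" constants reduce to two independent ones.
mu_0 = 4 * math.pi * 1e-7      # vacuum permeability, N/A^2
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m

c = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"c = {c:.0f} m/s")      # ~299,792,458 m/s
```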

In this way one can integrate for the first time gravitation with the other three fundamental forces (see below) and ultimately unify physics. Until now conventional physics, as codified in the standard model, has not been able to integrate gravitation with the other three fundamental forces. This is a well-known fact among physicists, and this circumstance discredits the whole edifice of this natural science. Physics is unable to explain the unity of Nature. This fact is not well understood by most people nowadays, because it is deliberately neglected or even covered up by the theoreticians.

The unification of physics has been the dream of many prominent physicists, such as Einstein, who introduced the notion of the universal field equation, also known as the “Weltformel” (world equation), or H. Weyl, who believed that physics could be developed into a universal field theory.

This idea has been carried forward in such modern concepts as Grand Unified Theories (GUTs), theories of everything and string theories, however without any tangible success.

If such a law can be discovered, it will lead automatically to the unification of physics and all natural sciences to a “General Theory of Science”.

At present, physics cannot be unified. Gravitation cannot be integrated with the other three fundamental forces in the standard model, and there is no theory of gravitation at all. Newton’s laws of gravitation describe precisely the motion and gravitational forces between two interacting mass objects, but they give us no explanation as to how gravitation is exerted as an “action at a distance”, also called “long-range correlation”, or what role photons play in the transmission of gravitational forces, given the fact that gravitation is propagated with the speed of light, which is actually the speed of photons.
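
The point that the law describes the force without explaining its transmission can be made concrete with a minimal numerical sketch (standard values assumed, not given in the text):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.348e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

# Newton's law of gravitation gives the magnitude of the force exactly ...
F = G * m_earth * m_moon / r**2
print(f"F = {F:.3e} N")   # ~1.98e20 N
# ... but it says nothing about how this "action at a distance" is transmitted.
```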

If this hypothetical “Universal Law” also holds true for the organisation of human society and for the functioning of human thinking, then we are allowed to speak of a true “Universal Law”. The discovery of such a law will lead to the unification of all sciences into a pan-theory of human knowledge. This universal theory will be presented, in its verbal form, as a categorical system (Aristotle) without contradictions, that is to say, it will follow the formalistic principle of inner consistency.

From a mathematical point of view, the new General Theory of Science, based on the Universal Law, will be organised as an axiomatics. The potential axiomatisation of all sciences will be thus based on the “Universal Law” or a definition thereof. This will be the first and only axiom, from which all other laws, definitions, and conclusions will be derived in a logical and consistent way. All these theoretical statements will then be confirmed in an experimental manner.

These are the ideal theoretical and formalistic criteria which a “Universal Law” must fulfill. The new General Theory of Science based on such a “Universal Law” will thus be entirely mathematical, because the very Law is of mathematical origin – it has to be presented as a mathematical equation.

In this case all natural and social sciences can in principle be presented as mathematical systems for their particular object of investigation, just as physics today is essentially applied mathematics for the physical world. Exact sciences are “exact” precisely because they are presented as mathematical systems.

The foundation crisis of mathematics

(see Wikipedia: Grundlagenkrise der Mathematik)

This methodological approach must solve one fundamental theoretical problem that torments the modern theory of science. This problem is well known as the “Foundation Crisis of mathematics”: mathematics cannot prove its validity with its own means. As mathematics is the universal tool for presenting Nature in all exact physical disciplines, the Foundation Crisis of mathematics extends to all natural sciences. Social sciences do not claim any universal validity, as they cannot be expressed mathematically. Therefore, the Foundation Crisis of mathematics is the Crisis of Science.

Although this crisis should be basic knowledge to any scientist or theoretician, present-day scientists are completely unaware of its existence. Hence their total agnosticism with respect to the essence of Nature.

This ignorance is difficult to explain, as the foundation dispute in mathematics, known in German as Grundlagenstreit der Mathematik, has dominated the spirits of European mathematicians during the first half of the 20th century. The current ignorance of scientists about this crisis of science stems from the fact that mathematicians have not yet been able to solve the foundation crisis of mathematics and have swept it with a large broom under the carpet of total forgetfulness.

Mathematics is a hermeneutic discipline and has no external object of study. All mathematical concepts are “objects of thought” (Gedankendinge). Their validity cannot be verified in the external world, as is the case with physical laws. Mathematics can only prove its validity by its own means.

This insight emerged at the end of the 19th century and was formulated for the first time as a theoretical programme by Hilbert in 1900. By this time, most mathematicians recognized the necessity of unifying the theory of mathematics through its complete axiomatisation. This was called “Hilbert’s formalism”. Hilbert himself made an effort to axiomatize geometry on the basis of a few elementary concepts, such as straight line, point, etc., which he introduced in an a priori manner.

The partial axiomatisation of mathematics gained momentum in the first three decades of the 20th century, until the Austrian mathematician Gödel proved in 1931 in his famous theorem that mathematics cannot prove its validity by mathematical, axiomatic means. He showed in an irrevocable manner that each time Hilbert’s formalistic principle of inner consistency and lack of contradiction is applied to the system of mathematics – be it geometry or algebra – it inevitably leads to a basic antinomy (paradox). This term was first introduced by Russell, who challenged Cantor’s theory of sets, the basis of modern mathematics. Gödel showed by logical means that any axiomatic approach in mathematics inevitably leads to two opposite, mutually exclusive results.

The continuum hypothesis

See also: Continuum hypothesis

Until now, no one has been able to disprove Gödel’s theorem, which he further elaborated in 1937. With this theorem the foundation crisis of mathematics began and is still ongoing, as embodied in the continuum hypothesis, notwithstanding the fact that all mathematicians after Gödel prefer to ignore it. On the other hand, mathematics seems to render valid results when it is applied to the physical world in the form of natural laws.

This observation leads to the only possible conclusion.

The discovery of the “Universal Law”

The solution of the continuum hypothesis and the elimination of the foundation crisis of mathematics can only be achieved in the real physical world and not in the hermeneutic, mental space of mathematical concepts. This is the only possible “proof of existence” that can eliminate the Foundation Crisis of mathematics and abolish the current antinomy between its validity in physics and its inability to prove the same in its own realm.

The new axiomatics that will emerge from this intellectual endeavour will no longer be purely mathematical, but will be physical and mathematical at once. Such an axiomatics can only be based on the discovery of the “Universal Law”, the latter being at once the origin of physics and mathematics. In this case, the “Universal Law” will be the first and only primary axiom, from which all scientific terms, natural laws and various other concepts in science will be axiomatically, i.e. consistently and without any inner contradiction, derived. Such axiomatics is rooted in experience and will be confirmed by all natural phenomena without exception. This axiomatics is the foundation of the General Theory of Science, which the author developed after he discovered the Universal Law of Nature in 1994.

References:

  1. Stankov, G. Stankov’s Universal Law Press, www.stankovuniversallaw.com
  2. Tipler, PA. Physics for Scientists and Engineers, 1991, New York, Worth Publishers, Inc.
  3. Feynman, RP. The Feynman Lectures on Physics, 1963, California Institute of Technology.
  4. Peebles, PJE. Principles of Physical Cosmology, 1993, Princeton, Princeton University Press.
  5. Berne, RM & Levy, MN. Physiology, St. Louis, Mosby-Year Book, Inc.
  6. Bourbaki, N. Elements of the History of Mathematics, 1994, Heidelberg, Springer Verlag.
  7. Davies, P. Superstrings. A Theory of Everything?, 1988, Cambridge, Cambridge University Press.
  8. Weyl, H. Philosophie der Mathematik und Naturwissenschaft, 1990, München, Oldenbourg Verlag.
  9. Barrow, JD. Theories of Everything. The Quest for Ultimate Explanation, 1991, Oxford, Oxford University Press.
  10. Stankov, G. Das Universalgesetz. Band I: Vom Universalgesetz zur Allgemeinen Theorie der Physik und Wissenschaft, 1997, Plovdiv, München, Stankov’s Universal Law Press.
  11. Stankov, G. The Universal Law. Vol. II: The General Theory of Physics and Cosmology, 1999, Stankov’s Universal Law Press, Internet Publishing 2000.
  12. Stankov, G. The General Theory of Biological Regulation. The Universal Law in Bio-Science and Medicine, Vol. III, 1999, Stankov’s Universal Law Press, Internet Publishing 2000.

 

I. Space-Time = Energy Has only Two Dimensions (Constituents) – Space and Time

I.1. Systems of Measurements and Units in Physics (Part 1)

“The laws of physics express relationships between physical quantities, such as length, time, force, energy and temperature. Thus, the ability to define such quantities precisely and measure them accurately is a requisite of physics. The measurement of any physical quantity involves comparing it with some precisely defined unit value of the quantity.“ (1)

This is the departing point of any intellectual effort in physics. In this essay I shall explain why the “ability to define“ physical quantities appears to be the “Achilles heel“ of modern physics.

I shall also explain why physicists have failed to grasp that energy = space-time = All-That-Is, which is the very object of their science, has only two dimensions – space and time – and not six fundamental dimensions as they currently claim with reference to the SI system. This is the third biggest blunder in physics, and it is closely linked to their inability to understand epistemologically their own definition of mass as an energy relationship, which is a dimensionless number. This will be the topic of my next publication. The second blunder is the confusion of the basic physical quantity of electromagnetism and quantum mechanics, charge, which is in fact a synonym (pleonasm) for geometric area. This blunder has been thoroughly revealed in my pivotal publication:

The Greatest Blunder of Science: “Electric Charge” is a Synonym for “Geometric Area”,

which I will present in a simple popular-scientific version later on, for the sake of completing my discussion of all the blunders scientists have made in physics and related disciplines.

In many ways, the new Physical and Mathematical Axiomatics and Theory of the Universal Law is a painstaking forensic exploration of the countless blunders physicists and theoreticians have accumulated in less than four centuries since Galileo Galilei conducted his famous experiment on gravitation and laid the foundation of this natural science. Let us begin our methodological forensics with the epistemological background of the SI system, which is at the core of this experimental discipline, as not a single experiment can be conducted in physics without employing this system of basic SI units and physical quantities.

Everybody with a modicum of physical knowledge should know that the mathematical (symbolic) expression of any physical quantity consists of a number, which is the relationship between the magnitude of the assessed quantity and the arbitrarily chosen unit for this quantity, and the name of the unit. If a distance, e.g. the length of a soccer field, is 100 times as long as 1 metre (the length unit of choice), we write for it “100 metres”. The magnitude of any physical quantity thus includes both a number and a unit. This presentation is a pure convention.
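
This convention can be expressed in a few lines of code; the following is a purely illustrative sketch (the class name and example are not from the text):

```python
from dataclasses import dataclass

@dataclass
class Quantity:
    value: float   # the ratio of the measured magnitude to the chosen unit
    unit: str      # the name of the arbitrarily chosen unit

# "100 metres": the soccer field is 100 times the unit 1 metre.
soccer_field = Quantity(100.0, "m")
print(soccer_field)   # Quantity(value=100.0, unit='m')
```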

All physical quantities can be expressed in terms of a small number of fundamental quantities and units. Most quantities in physics are composite quantities within mathematical formalism. This is generally acknowledged. For example, speed is expressed as a relationship between a unit of length (metre) and a unit of conventional time (second): v = s/t (m/s).

The most common physical quantities, such as force, momentum, work, energy and power, which are basic to many physical laws, can be expressed with only three fundamental quantities – length, conventional time and mass. The set of all standard units in physics is called the “Système International d’Unités” or SI system. It consists of a few basic quantities and their corresponding units, from which all other quantities and units can be derived by applying the method of mathematical formalism (method of definition = method of measurement). These are:

  • (1) length (metre),
  • (2) conventional time (second),
  • (3) mass (kilogram),
  • (4) temperature (kelvin),
  • (5) amount of substance, also called “the mole“ (mol),
  • (6) current (ampere) and
  • (7) charge (coulomb) (2).

The last two quantities are defined in a circular manner, so that they can be regarded as one quantity.
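
How all other quantities and units are composed from the base ones can be indicated with a small, purely illustrative sketch (the selection of derived quantities is mine, not the text’s):

```python
# Exponents of the base units (metre, second, kilogram) for a few derived quantities.
base = ("m", "s", "kg")

derived = {
    "speed":  (1, -1, 0),   # m/s
    "force":  (1, -2, 1),   # kg*m/s^2 = newton
    "energy": (2, -2, 1),   # kg*m^2/s^2 = joule
}

for name, exponents in derived.items():
    unit = "*".join(f"{b}^{e}" for b, e in zip(base, exponents) if e != 0)
    print(f"{name}: {unit}")
```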

A major objective of this disquisition is to present theoretical and experimental evidence that these six fundamental quantities are axiomatically derived from the two constituents of space-time – space and time. I will begin with the first two quantities in this essay and will discuss the other four in follow-up publications. As all the other conventional quantities used in physics are known to be derivatives of these few quantities, this is also true for any new physical quantity.

This essay will render the fundamental proof that space-time has only two constituents, quantities, dimensions (the terms are used as synonyms) – space and time. This proof brings about the greatest simplification in modern physics, which is now fragmented by inadequate definitions whose epistemology has never been truly worked out in an axiomatic and logical manner. This is what I define in the new theory of the Universal Law as “applied mathematical formalism”, which is another word for the new Integrated Physical and Mathematical Axiomatics of the Universal Law.

By way of introduction, we begin with the definition of the SI units of space and conventional time, metre and second. The definition of these quantities is at the same time the method of measurement of their units, which is applied mathematics and/or geometry. The standard unit of length ([1d-space]-quantity), 1 metre (1 m), was originally indicated by two scratches on a bar made of platinum-iridium alloy kept at the International Bureau of Weights and Measures in Sèvres, France.

This is, however, an indirect system (a surrogate) of standard length. The actual system of comparison is the arbitrarily chosen distance between the equator and the North Pole along the meridian through Paris, which is roughly 10 million metres. Thus the earth is the initial, real reference system of distance – the metre is an anthropocentric surrogate.

As this gravitational system of reference length was found to be inexact, the standard metre is now arbitrarily defined with respect to the speed of light. This quantity is defined in the new Axiomatics of the Universal Law as [1d-space-time] of the photon level: it is the distance travelled by light in empty (?) space during a time of 1/299,792,458 second. This makes the velocity of the photon level c = 299,792,458 m/s. The photon level, of which visible light is a narrow spectrum (a system), has a constant velocity c.
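
The circular character of this definition can be made explicit in a two-line sketch (illustrative only): the defined value of c and the defined fraction of a second reproduce the metre by construction.

```python
# A minimal sketch of the current definition of the metre.
c = 299_792_458        # defined velocity of the photon level, m/s
t = 1 / 299_792_458    # the time interval used in the definition, s

one_metre = c * t      # the distance travelled by light in that interval
print(one_metre)       # 1.0 - c and the metre define each other circularly
```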

This has been deduced in the new Axiomatics from the primary term of human consciousness – energy = space-time = All-That-Is – and confirmed by the theory of relativity and physical experience. The universal property of all levels of space-time – their constant specific velocity, also presented as a specific action potential EA, which is the universal manifestation of energy exchange – is intuitively considered in the conventional definition of the SI unit of length, 1 metre. So far, this fact has not been comprehended by any theoretician.

Through the standard definition of space and conventional time (see below), the velocity of the photon level is arbitrarily selected as the universal reference system of space-time, to which all other physical systems are set in relation (method of measurement).

The standard definition of the length unit reveals a fundamental epistemological fact that has entirely evaded the attention of physicists. The present standard definition of 1 metre by using the speed of light gives the impression of being clear-cut and unambiguous. In fact, this is not the case. The definition of this length unit is based on the principle of circular argument and involves the definition of the time unit, 1 second. If the latter unit could be defined in an a priori manner, all would be well.

When we look at the present definition of the second, which is at the same time the only possible definition of the quantity “conventional time t“, we come to the conclusion that this is not possible. The standard unit of time, originally defined as 1/60 × 1/60 × 1/24 of the mean solar day, is now defined through the frequency of the photons emitted during a certain energy transition within the caesium atom, which is f = 9,192,631,770 cycles per second.

In this case, we have again a concrete photon system with a more or less constant frequency, which has been arbitrarily selected as a reference system of time measurement. From this real reference system of space-time, an anthropocentric surrogate – the clock with the basic unit of 1 second – has been introduced. The conventional time of all events under observation is then compared with the time of the clock. Thus the measurement of time in physics and daily life is in reality:

a comparison of the frequency of events that are observed with the frequency (periodicity) of a standard photon system.
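
A minimal sketch of this comparison (the counted number of cycles below is a hypothetical clock reading, chosen only for illustration):

```python
# The defined caesium hyperfine frequency: cycles of the standard photon system per second.
f_cs = 9_192_631_770

# Hypothetical number of caesium cycles registered while an event lasts.
cycles_counted = 27_577_895_310

duration = cycles_counted / f_cs   # conventional time of the event in seconds
print(f"{duration} s")             # 3.0 s
```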

The method of definition and measurement of the quantity “conventional time t“ and its unit, 1 second, is therefore a circular comparison of actual periodicities. Such quantities are pure (dimensionless) numbers that belong to SP(A) (for further information see here). However, any experimental measurement of photon frequency involves the measurement of length – the actual quantity of time cannot be separated from the measurement of the wavelength λ, which is an actual [1d-space]-quantity.

Therefore, the two constituents of space-time cannot be separated in real terms because they are canonically conjugated. The equation of the speed of light, c = λf, is intrinsic to any measurement of photon frequency and wavelength. Neither wavelength nor frequency can be regarded as a distinct entity – they behave reciprocally and can only be expressed in terms of space-time:

c =  λ f = [1d-space] f = [1d-space-time]p 

The wavelength and frequency of photons are the actual quantities of the two constituents, space and time, of this particular level of space-time. The measurement of any particular length [1d-space] or time f = 1/t in the physical world is, in fact, an indirect comparison with the actual quantities of space and time of a photon system of reference. The introduction of the SI system obscures this fact.
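
A minimal sketch of this conjugation for the caesium reference radiation itself (standard defined values assumed): stating the frequency of the time standard automatically fixes an associated length.

```python
c = 299_792_458       # velocity of the photon level, m/s
f = 9_192_631_770     # frequency of the caesium reference radiation, Hz

wavelength = c / f    # the [1d-space] quantity conjugated with that frequency
print(f"lambda = {wavelength * 100:.2f} cm")   # ~3.26 cm
```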

We conclude:

The one-dimensional space-time of the photon level [1d-space-time]p is the universal reference system of length s = [1d-space] and conventional time t = 1/f, and their units, 1 metre and 1 second. The SI system is an anthropocentric surrogate of this real reference system and can be easily eliminated. In fact, it should be eliminated in theoretical physics as it only obscures the understanding of energy = space-time = physical world = All-That-Is. This is done in the new Physical and Mathematical Theory of the Universal Law.

This conclusion is of immense importance – I have shown in Volume II that the theory of relativity uses the same intrinsic reference system to assess relativistic space and time of kinetic objects. Lorentz transformations, with which these quantities are presented, are relationships (quotients) of the space-time of the object in motion as assessed by v with the space-time of the photon level as assessed by c. These are formalistic constructions within the system of mathematics. I have proved that these quotients belong to the probability set 0≤P(A)≤1 and can be expressed in terms of statistics as summarized in the new symbol SP(A).

From this survey, it becomes evident that the physical quantities, length and conventional time, and their basic units, metre and second, are defined in a circular manner by the arbitrary choice of a real reference system of space-time – in this particular case, of photon space-time. The SI system is an epiphenomenon; it is a human convention and can be substituted by any other system through the introduction of conversion factors or better eliminated. This also applies to the other four basic quantities and their units, which will be discussed in separate publications.

Therefore, the definition of any physical quantity cannot be separated from its method of measurement, which is mathematics. The latter is, at the same time, its method of definition. Physical quantities as defined in physics do not have a distinct existence in the real world, but are intrinsically linked to their mathematical definition, which is a product of abstract human consciousness. Mathematics is a hermeneutic discipline without any external object. As any Axiomatics is also a product of human consciousness, the derivation of all known physical quantities from the primary term is essentially a problem of correct organisation of physical and mathematical thinking and not a problem that should be resolved through explorative empiricism.

Thus every method of measurement and every definition of a physical quantity are based on the principle of circular argument. This epistemological result of our methodological analysis of physical concepts is of universal character. The explanation is very simple: as every physical quantity reflects the nature of space-time as a U-subset thereof, its definition has to comply with the principle of last equivalence of the primary term which postulates that all terms that assess the primary term are equivalent independently of the choice of the particular words.

This fundamental axiom of the new Axiomatics is intuitively perceived by the physicist’s mind and is put forward in all subsequent definitions of physical quantities. As these terms are of secondary character – they are parts of the Whole – the actual principle applied in physical definitions nowadays is the circulus vitiosus. The vicious character of this principle when applied to the parts, together with the simultaneous neglect of the primary term, explains why the existence of the Universal Law has been overlooked in the past.

Physics has produced in a vicious circle a large number of concepts, which are either synonyms or partial perceptions of the primary term. Unfortunately, they have been erroneously regarded as distinct physical entities. This has given rise to the impression that these physical quantities really exist. In fact, they only exist as abstract concepts in the physicist’s mind and are introduced in experimental research through their method of measurement which is mathematics.

Space-time is termless – it is an a priori entity; the human mind, on the other hand, is a local, particular system of recent origin that has the propensity to perceive space-time and describe it in scientific terms. Science originally means „knowledge“, but it also includes the organisation of knowledge – every science is a categorical system based on the primary concept of space-time. Only the establishment of a self-consistent Axiomatics which departs from the primary term of space-time leads to an insight that there is only one Law of Nature and allows a correct organisation of human knowledge on the basis of present and future empiric data.

Notes:

1. Textbook on Physics, PA Tipler, p. 245 (I have used an earlier edition of this textbook, so that the pages may have changed. Note, George)

2. Some authors believe that candela (cd) is also a basic unit, but this is a mistake.

 

I.2. Mass and Mind: Why Mass Does Not Exist – It Is an Energy Relationship and a Dimensionless Number (Part 2)

Mass does not exist – it is an abstract term of our consciousness (object of thought) that is defined within mathematics. The origin of this term is energy (space-time).

Mass is a comparison of the space-time (energy) of any particular system Ex to the space-time of a reference system Er (e.g. 1 kg) that is performed under equal conditions (principle of circular argument): m = Ex / Er = SP(A), when g = constant, which is the case most of the time on this planet at the same altitude. When this comparison is done for gravitation, it is called “weighing”. The ratio that is built is a static relationship that does not consider energy exchange, although it is obtained from an energy interaction such as weighing. This explains the traditional presentation of mass as a scalar (for more information on scalars see here).

We can call the space-time of a reference system “1 kg“ or “1 space-time“ without changing anything in physics. In the new Axiomatics we ascribe mass for didactic purposes to the new term “structural complexity” Ks. When f = 1,

m = Ks = SP(A)[2d-space] = SP(A).

In this case [2d-space] = SP(A) = 1 is regarded as a spaceless “centre of mass“ within geometry, which is a pure abstraction of the human mind as all real objects have a volume (3d-space) and therefore cannot be spaceless.

The definition of mass in classical mechanics is as follows:

“Mass is an intrinsic property of an object that measures its resistance to acceleration.“ (1)

The word “resistance“ is a circumlocution of reciprocity: m ≈ 1/a. This definition creates a vicious circle with the definition of force in Newton’s second law:

“A force is an influence on an object that causes the object to change its velocity, that is, to accelerate“: F ≈ a. (2)

From this circular definition, we obtain for mass m ≈ 1/F. If we consider the number “1“ as a unit of force, Fr = 1 (reference force), we get for the mass m = Fr /F. This is the vested definition of mass as a relationship of forces. As force is an abstract U-subset of energy F = E/s = E, when s = 1 unit, e.g. 1 m, we obtain for mass a relationship of two energies:

m = Er /E = SP(A).

We conclude:

The physical quantity mass is, per definition and method of measurement, a relationship of two energies. In gravitational measurements the reference energy is that of 1 kg – the SI reference system defined with respect to the earth’s gravitation – which can be replaced by any other reference system. The definition of mass is equivalent to the definition of absolute time f = 1/t = E/EA = SP(A). In fact, it is a dimensionless number, as is the case with all physical quantities according to their method of definition and measurement within the SI system, which is mathematics (see also here).

The definition of mass follows the principle of circular argument. If we rearrange m = 1/a to ma = 1 = F = E = reference space-time (Newton’s second law), we obtain the principle of last equivalence. This elaboration of the definition of mass proves again that mathematics is the only method of definition and measurement of physical quantities.

This knowledge is basic for an understanding of various mass measurements in physics that have produced a number of fundamental natural constants. I have derived some of these constants by applying the Universal Equation, as can be seen at a glance in Table 1. The definition of relativistic mass follows the same pattern. I have discussed this quantity extensively in conjunction with the traditional concept of space-time in the theory of relativity (see chapter 8.3 & equation (43) in Volume II).

The equivalence between the method of definition of physical quantities and the method of their measurement, being mathematics in both cases, can be illustrated by the measurement of weight F = E (s = 1). The measurement of weight is an assessment of gravitation as a particular energy exchange. The instruments of measurement are scales. With scales we weigh equivalent weights Fr = Fx at equilibrium; since s = 1 = constant, it follows that Er = Ex. This is Newton’s third law expressed as an energy law according to the axiom of conservation of action potentials (see Axiomatics).

The equilibrium of weights may be a direct comparison of two gravitational interactions with the earth, or it may be mediated through spring (elastic) forces. As all systems of space-time are U-subsets, the kind of mediating force is of no importance: any particular energy exchange, such as gravitation, can be reduced to an interaction between two interacting entities (axiom of reducibility). I have reduced the entire philosophy behind the current definitions of physical laws to three fundamental axioms in terms of epistemology, i.e., in terms of human cognition and with respect to the Universal Law. For further information read the new Axiomatics.

Let us now consider the simplest case, when the beam of the scales is at balance. In this case, we compare the energy Er (reference weight) and Ex (object to be weighed), as they undergo equivalent gravitational interactions with the earth (equal attraction). The equivalence of the two attractions is visualized by the balance, e.g. by the horizontal position of the scale beam. This is an application of the principle of circular argument – the building of an equivalence and a comparison – which is, by the way, the practical content of any mathematical equation.

Please observe that humans only employ mathematics based on mathematical equations and have no functional applied mathematics based on inequalities (≤, ≥). When these symbols are used in physics, they always lead to nonsensical conclusions, which are bluntly wrong. This is very important to know.

All physical experiments assess real space-time interactions according to the principle of circular argument. This also holds for any abstract physical quantity, with which any particular energy interaction is described. All physical quantities in physics are abstract mathematical definitions and have no real existence. There is only energy (energy exchange) in All-That-Is.

Let us now describe both interactions, the reference weight Er and the object to be weighed Ex , with the earth’s gravitation according to the axiom of reducibility. For this purpose, we express the two systems in the new space-time symbolism. The space-time of the earth EE is given as gravitational potential (long-range correlation, LRC):

EE  = LRCG = UG = [2d-space-time]G.

The space-time of the two gravitational objects, Er and Ex, is given as mass (energy relationship):  Er = mr = SP(A)r and Ex = mx = SP(A)x. As the two interactions are equivalent when the scales are at balance, we obtain the Universal Equation for each weighing:

E = ErEG =  ExEG = SP(A)r[2d-space-time]G = SP(A)x[2d-space-time]G 

We can now compare the two gravitational interactions by building a quotient within mathematics:

K = SP(A) = SP(A)x[2d-space-time]G : SP(A)r[2d-space-time]G =

= SP(A)x / SP(A)r = mx / mr = (x) kg

We obtain the Universal Law as a rule of three. One can use the same equation to obtain the absolute constants – the coefficients of vertical and horizontal energy exchange – in the new theory of the Universal Law (see Volume II). “Weighing“ is thus based on the equivalence of the earth’s gravitation for each mass measurement, i.e., UG = g = constant. If UG were to change from one measurement to another, we would not be in a position to perform any adequate weighing; more precisely, we would not know what the energy relationships (masses) between distinct objects really are.
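As an illustration of this rule of three, here is a trivial sketch with hypothetical balance readings; the only point is that the gravitational potential g is the same for both pans and cancels in the quotient:

```python
# A trivial sketch with hypothetical balance readings: weighing as the
# building of a dimensionless quotient K = SP(A) = m_x / m_r, with the
# gravitational potential g assumed equal for both pans (and cancelling).
def weigh(E_x: float, E_r: float) -> float:
    """Return the mass relationship m = E_x / E_r (a pure number)."""
    return E_x / E_r

# Example: an object balanced by 2.5 reference units of 1 kg each.
m = weigh(E_x=2.5, E_r=1.0)
print(f"mass relationship in units of the 1 kg reference: {m}")  # 2.5
```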

Any assessment of space-time requires, firstly, the building of equivalences (as mathematical equations) and, secondly, the comparison between two identical entities. “Identical” means that we can only compare physical quantities that are the same in terms of their mathematical definition and method of measurement but have a different value. This is the principle of circular argument as the only operational method of physics and mathematics. One can use the same principle to define a level as an abstract U-subset of space-time, consisting of equivalent systems or action potentials.

The principle of circular argument is the only cognitive principle of human consciousness (3).

Without it, the world would be incomprehensible. The above statement is a tautology – there is no possibility to distinguish between “cognition“ and “consciousness“. Such tautologies reveal the closed character of space-time – the principle of circular argument is the universal operation of the mind with respect to the primary term.

The above equation exemplifies how one obtains the “certain event“, which is a statistical term in physics: mr = mx = 1 kg = SP(A) = certain event = 1. If mx = SP(A) ≥ 1, the “1 object“ to be weighed is equivalent to n (kg), that is, 1 = n (n = all numbers of the continuum = ∞). Within mathematical formalism we can define arbitrarily any number of the continuum, which stands for a system of space-time, as the certain event and assign it the number “1“, although it may have n elements. This mathematical procedure is fairly common in physics but has not been comprehended by all physicists in terms of the philosophy of mathematics as an abstract hermeneutic discipline without any external object.

The SI Unit Mole Is a Dimensionless Number That Pertains to Time f

We can show that the basic quantity “1 mole“ is defined in the same way. Any definition of physical units, e.g. SI units, follows this pattern. The standard energy system of 1 kg contains, for instance, 1000 g, 1 000 000 mg and so on (4). We can build an equivalence between the certain event „1“ and any other number n, such as 1000 or 1 000 000, by adding arbitrary names of units to these numbers, which stand for real space-time systems: e.g. 1 kg = 1000 gram. Thus the primary idea of space-time as conceptual equivalence is introduced in mathematics not through numbers (objects of thought), which are universal abstract signs that can be ascribed to infinite real objects, but through descriptive terms (words), such as “kilogram“, “gram“ and “milligram“. The latter are aggregates (assemblies) of n elements, whereas the elements are also arbitrarily defined within mathematics as identical by the principle of circular argument, so as to build this set of elements as an abstract system or level of space-time.

This is because any discrimination of space-time = All-That-Is takes place first in the mind and is only then projected onto the external world, where it can be validated in experiments. This holds true for any abstract physical quantity within the SI system as well as for all elementary particles in quantum mechanics, which are first defined within mathematics (see Bohr’s atomic model in Volume II).

In modern esotericism this basic truth is explained in a somewhat simplistic manner by saying that humans are the creators of their reality, which is All-That-Is. Every human being creates and inhabits its own universe, but then these same light workers have great difficulties explaining how these subjective realities merge/intersect with each other so as to create the consensual reality of the current 3D holographic model. Obviously there is more to that, and the explanation can only come from a philosophical disquisition on the foundations of mathematics and physics, as is done in the new Axiomatics and Theory of the Universal Law.

Back to the terms in human language that are attributed to numbers when they assess real systems of space-time. These descriptive terms establish the link between hermeneutic mathematics and the real world. Such terms are of precise mathematical character – when we apply the principle of circular argument to the words “kilogram“ and “gram“, we obtain a dimensionless quotient: kilogram/gram = 1000, which belongs to the continuum. From this we conclude that human language can be “mathematized“ when the individual words, or rather their connotations, are axiomatically defined from the primary term by the principle of circular argument.

Instead of the arbitrary units, kilogram and gram, we can choose the space-time of Planck’s constant h as a reference unit of mass and call it the basic photon (see also Table 1):

E = h/c² = mp  = SP(A) = 1

by comparing it with itself. In this case, we follow the pattern of the SI system, which uses photon space-time as a reference system for the basic units of space and time (see Part I).

We conclude:

As mass is a space-time relationship, that is, it only contains space and time, we should also use photon space-time as the initial reference system for the definition of mass and eliminate the present reference system of earth’s gravitation, given as 1 kg. Since these reference systems are transitive, we can compare the space-time of the basic photon h with the space-time of the standard SI system of mass, called 1 kg, and will obtain a different quotient or dimensionless number but the relations between the energies of the systems given as mass will remain the same (the Universal Law as a rule of three).

We can then express the mass of all material systems, for instance, the mass of all elementary particles and macroscopic gravitational objects, in relation to the mass of h in kg and obtain the same mass values as assessed by direct measurements (see Table 1). The reason why these results agree is that mathematics is the only method of definition and measurement of mass or any other quantity.
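A minimal numeric sketch of this transitivity, with standard CODATA-style values of h, c and the particle masses as assumed inputs (they are not quoted in this passage), shows that switching the reference unit from 1 kg to the basic photon changes the individual quotients but not the relationships between systems:

```python
# Sketch of the transitivity of reference systems, with CODATA-style
# values as assumed inputs (they are not quoted in this passage).
h = 6.62607015e-34          # Planck constant, J*s
c = 299_792_458.0           # speed of light, m/s
m_basic = h / c**2          # mass of the basic photon in kg (~0.737e-50)

m_e = 9.1093837015e-31      # electron mass, kg
m_pr = 1.67262192369e-27    # proton mass, kg

# The same masses re-expressed with the basic photon as the reference unit:
m_e_h = m_e / m_basic
m_pr_h = m_pr / m_basic

# The relationship between the two systems is the same in both references:
print(m_pr / m_e)       # ~1836.15 (kg reference)
print(m_pr_h / m_e_h)   # ~1836.15 (basic-photon reference)
```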

I assume that my readers already grasp from this and my previous publication what a profound revolution this simple suggestion brings about in present-day physics, which until now claims that “photons do not have a mass”. That is why physicists cannot account for more than 90% of the theoretically calculated mass in the universe according to their cosmological models and define it in a rather obscure esoteric manner as “dark matter”. This statement alone has reduced modern cosmology to “fake science”.

Back to mathematics – the mother-father of all science. Mathematics is a transitive axiomatic system due to the closed character of space-time – it works both ways. One can either depart from the definition of mass and then confirm it experimentally in a secondary way or assess mass as a space-time relationship of real systems and then formalize this measurement into a general definition of this quantity. In both cases, the primary event is the mathematical definition according to the principle of circular argument.

When we set E = mp = h/c² = 1 and mp = (h/c²)×1 kg, the space-time of Planck’s constant h can be chosen as the initial reference system of mass measurement. This is a consistent step based on the knowledge that space-time has only two dimensions, the initial reference frame of which is photon space-time (see Part I). All other units can be derived from these two units.

This interdependence can be easily demonstrated by presenting the Lorentz factor of relativity, which assesses the relativistic changes of space and time in electromagnetism and the theory of relativity (Volume II, chapters 8.2 & 8.3), as the universal equation of mass measurement. I will refrain from giving this equation here so as not to make this article unduly complicated, but you can find it as equation (43) on page 150, Volume II.

Departing from this equation, I have proved (chapter 8.4, Vol. II) that mass at rest is a synonym of the certain event, while relativistic mass is a synonym of Kolmogoroff’s probability set (0,1). In this way I have accomplished the full integration of all the basic physical disciplines within mathematics, which was impossible before because mathematical theory still suffered from its foundation crisis from the beginning of the 20th century, which I finally resolved in 1995. This must be considered the second most important theoretical achievement on my part in the context of the discovery of the Universal Law, first in biological (organic) matter and then in physical (inorganic) matter.

As we see, physics can be fairly simple in terms of knowledge when the concepts of this discipline are axiomatically arranged. The above equations show that we can present space-time one-, two-, or n-dimensionally without affecting the basic conclusion of our axiomatics:

The only thing we can do in physics is to compare the space-time of one system or a quantity thereof with that of another system.

The practical consequence of this conclusion is the elimination of the SI system, as All-That-Is has only two dimensions. From a didactic point of view, this refrain should be reiterated as often as the theme in Ravel’s Boléro, so that even the most conservatively thinking, recalcitrant physicist will finally grasp it.

Notes:

1. Textbook on Physics, PA Tipler, p.80. (This reference is from an earlier edition of this textbook and the page numbers may have changed in this latest edition.)

2. Textbook on Physics, PA Tipler, p.80.

3. This physical conclusion is of paramount importance for human gnosis and eschatology. These aspects are covered in a separate book on esoteric Gnosis.

4. One dollar as the certain event, 1$ = SP(A) = 1, is equivalent to 100 cents, and 1 million dollars as another certain event, 1 million = SP(A) = 1, is equivalent to 1 000 000 $: 1 = n = 1 000 000. Mathematics is based on human free will, and mathematical free will means the right and ability of human consciousness to assign any number to any system of space-time and vice versa.

 

I.3. Mass, Matter and Photons – How to Calculate the Mass of Matter From the Mass of Photon Space-Time (Part 3)

As the quantity “mass“ is a space-time relationship, there are infinite masses in space-time. We shall derive some basic, constant space-time relationships, which are conventionally described as “natural constants“. Thus we shall prove that space-time is a closed entity so that we can derive any constant mass from any other constant mass. The same is true for the magnitude of any other quantity of an actual space-time relationship. As such constants are part of distinct physical laws, which until now could not be integrated, we shall demonstrate how physics can be unified (see Table 1).

For this purpose we shall employ the new space-time symbolism and neglect the SI units that obscure our physical knowledge. The non-mathematical term “kilogram“ will be ascribed to the final result, so as to make clear that we have selected the space-time of 1 kilogram as a real reference system. The reason for this is the use of conventional data from the literature, which are given in SI units.

We begin with the mass mp of Planck’s constant h, which is a space-time relationship of this photon system with the SI unit 1 kg. In the new axiomatics, we call Planck’s constant h the “basic photon“. This smallest constant amount of photon energy is the elementary action potential of the photon level EA = h. The energy of any photon (electromagnetic wave) as a system of this level can be assessed by applying the Universal Equation:

E = EA f = nhf = SP(A)[1d-space-time][1d-space] f

where n is any number of the continuum. This proves that Planck’s equation is an application of the Universal Law for photon space-time. Each action potential can be regarded as a system of space-time. This also holds for the basic photon: h = E = SP(A)[2d-space-time]p. When we set its space-time in relation to photon space-time Ep = c² = [2d-space-time]p = LRCp, we obtain the space-time relationship SP(A) of the elementary action potential “basic photon” as mass in kg:

mp = h/c² = hμo/4πk = hμoεo = SP(A)[2d-space-time]p : [2d-space-time]p =

= SP(A) = 0.737×10⁻⁵⁰ kg

The constant mp is the mass of the basic photon. It is a new fundamental constant obtained within mathematics; it assesses the constant space-time of this real photon system in relation to the real surrogate SI system “1 kg“, according to the principle of circular argument. All systems have a constant space-time because they contain the whole as an element and express its properties – in this case, the constancy of space-time. The space-time of any system can only be assessed in comparison with the space-time of another system (principle of circular argument). Such space-time relationships are always constant. That is why this basic constant is central to the integration of all natural physical constants, of all physical laws in which they appear, and subsequently of all physical disciplines as illustrated on one page with Table 1.

The above equation illustrates this principle, which is also basic to the Law: f = SP(A) = E/EA = m. As previously noted, mass can be regarded as time f within mathematical formalism (freedom of mathematical consciousness). The time fp and space λA of the basic photon are thus natural constants:

fp = 1 s⁻¹ and

λA = c/fp = [1d-space-time]p /fp = [1d-space]p = 3×10⁸ m.

In my previous articles on the SI system I have shown that we can alternatively select the wavelength  λA of the basic photon as a reference unit of length and compare the anthropocentric length unit of 1 m with it. In this case we obtain the conversion factor:

A = λA/1 m = 2.99792458×10⁸

as a dimensionless quotient. As space-time is closed, we can depart from any magnitude and acquire any other magnitude and vice versa. The same is true for mathematics – continuum is space-time. We can obtain any number from any other number as a relationship. All the constants I have derived in the new physics of the Universal Law belong to the continuum – they are dimensionless numbers (quotients).
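For readers who wish to verify the numbers, here is a short sketch of the constants introduced above, with the SI values of h and c as assumed inputs (the text quotes only the resulting magnitudes):

```python
# Short sketch of the constants introduced above; the SI values of h and c
# are assumed inputs.
h = 6.62607015e-34       # Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s

m_p = h / c**2           # mass of the basic photon relative to 1 kg
f_p = 1.0                # time of the basic photon, set to 1 s^-1
lam_A = c / f_p          # wavelength of the basic photon in metres
A = lam_A / 1.0          # dimensionless conversion factor lambda_A / 1 m

print(f"m_p      = {m_p:.3e} kg")      # ~0.737e-50 kg
print(f"lambda_A = {lam_A:.8e} m")     # 2.99792458e8 m
print(f"A        = {A:.8e}")           # 2.99792458e8 (pure number)
```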

The equation of the basic photon is a new, key derivation of the Universal Law. It integrates five fundamental physical constants by introducing the new constant mp. These are:

  • speed of light c,
  • permeability of free space μo,
  • permittivity of free space εo,
  • Coulomb’s constant k, and
  • Planck’s constant h (see Table 1).

These constants are part of distinct laws, such as Coulomb’s law of electricity, Maxwell’s equations of electromagnetism, Planck’s equation of quantum mechanics and Einstein’s mass-energy-equation of the theory of relativity. So far, these laws could not be integrated. Thus a single application of the Universal Law (the mass of the basic photon) integrates such heterogeneous physical disciplines as classical mechanics, electromagnetism, quantum mechanics and the theory of relativity. This is, indeed, a remarkable result that demonstrates the superiority of the new theory over conventional physics.
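The integration of these constants in the equation of the basic photon rests on the conventional identities c² = 1/(μoεo) and k = 1/(4πεo). A quick numeric check, with CODATA-style values as assumed inputs, confirms that the three expressions for mp given above coincide:

```python
# Numeric check (CODATA-style values as assumed inputs) that the three
# expressions for m_p coincide, since c^2 = 1/(mu_0*eps_0) and
# k = 1/(4*pi*eps_0).
import math

h = 6.62607015e-34          # Planck constant, J*s
c = 299_792_458.0           # speed of light, m/s
mu_0 = 1.25663706212e-6     # permeability of free space, N/A^2
eps_0 = 8.8541878128e-12    # permittivity of free space, F/m
k = 1.0 / (4 * math.pi * eps_0)   # Coulomb's constant, ~8.988e9

print(h / c**2)                       # ~7.37e-51 kg
print(h * mu_0 * eps_0)               # ~7.37e-51 kg
print(h * mu_0 / (4 * math.pi * k))   # ~7.37e-51 kg
```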

In this process of physical integration, we have already derived Planck’s equation (see above) and Einstein’s law of energy from the Universal Equation. In Volume II I have proved that the other laws which are integrated in the equation of the basic photon are also applications of the Universal Law. This fact is anticipated by the above equation, which is a synthesis of the aforementioned laws.

The five constants are abstract quantities of photon space-time and contain far more information about this level than is generally assumed. I discuss these constants in Volume II, section “Electromagnetism” where I present for the first time the actual epistemological background of the two basic constants, μo and  εo (see chapter 6.3).

Mass is a space-time relationship of systems, and space-time is a unity. We can depart from the basic photon and obtain the space-time E of any elementary particle of matter as “mass“: E/h = SP(A) = m, and vice versa. I have done this for the electron, proton and neutron, as can be seen in Table 1. These elementary particles of matter are open systems and exchange energy – we can also speak of mass – with the photon level: they absorb and emit photons. There are several laws that describe this energy exchange (see thermodynamics). I have departed from the universal equation as a rule of three and have made use of the Compton wavelengths of the particles, which are known natural constants.
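A minimal sketch of this rule-of-three calculation, with CODATA-style Compton wavelengths as assumed inputs (cf. Table 1), reproduces the measured particle masses from the mass of the basic photon:

```python
# Sketch of the calculation described above: m = m_p * f_c with
# f_c = c / lambda_c, i.e. m = h / (lambda_c * c). The Compton wavelengths
# are CODATA-style assumed inputs (cf. Table 1 in the text).
h = 6.62607015e-34       # Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s
m_basic = h / c**2       # mass of the basic photon, kg

compton_wavelengths = {  # in metres
    "electron": 2.42631024e-12,
    "proton":   1.32140986e-15,
    "neutron":  1.31959090e-15,
}

for particle, lam_c in compton_wavelengths.items():
    f_c = c / lam_c              # Compton frequency of the particle
    m = m_basic * f_c            # equivalently h / (lam_c * c)
    print(f"{particle}: {m:.4e} kg")
# electron ~9.109e-31 kg, proton ~1.673e-27 kg, neutron ~1.675e-27 kg
```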

The masses of the elementary particles are fundamental natural constants that can be experimentally measured. They are basic not only to quantum mechanics, which is unable to explain them, but also to gravitation. This is what the famous physicist and Nobel-Prize winner Richard P. Feynman writes about the masses of elementary particles:

„So not only have we no experiments with which to check a quantum theory of gravitation, we also have no reasonable theory. Throughout the entire story there remains one especially unsatisfactory feature: the observed masses of the particles, m. There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don’t understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem.“ (R.P. Feynman, QED, Penguin, 1985, p. 151-52)

The answer to this disturbing question, as put forward by the founder of QED (quantum electrodynamics) is fairly simple in the light of the new axiomatics: space-time is continuum (primary axiom) and all constant numbers, which physicists obtain from experiments, are constant space, time, or space-time relationships that are introduced by themselves through mathematical formalism. The latter is the method of definition and measurement of all physical quantities as abstract U-subsets of the primary term.

Although the mass of particles is initially defined within mathematics, this quantity can be experimentally verified. This holds true for all abstract physical quantities of space-time and brings about the unity of mathematics and physical world and the resolution of the foundation crisis of mathematics.

One can illustrate this basic insight with the classical experiment of Compton scattering that assesses the vertical energy exchange between electron level and photon level. I will not present the derivation in this article but it can be found in Volume II, p. 154.

Mass can be regarded as a magnitude that gives us information on the density of space-time (see Volume II, chapter 3.10) – the higher the density, the more energy (mass) per space. That is why the higher dimensions that consist of much higher frequency energies are actually much denser in terms of energy per space than this 3D holographic model which is created by diluted energy per space.

In fact, space does not even exist in the 5D and higher dimensions but is only an illusion of the 3D matrix, created by the limited human senses and the introduction of static geometry in physics. This is accomplished by arresting time in the minds of the physicists, as was first done by Galileo Galilei with the introduction of the Pythagorean theorem to measure gravitation as a dynamic energy exchange. They simply set time t = 1/f = 1 and eliminate it from all further considerations. Since then this flaw has been perpetuated countless times by all physicists as soon as they perform any experiment and use geometry and/or mathematics as a method of definition and measurement through the SI system (see also Part I and Part II on this same issue). I was the first theoretician to resolve this issue from a cognitive and methodological point of view when I developed the new theory of the Universal Law in 1995.

Figuratively speaking, the reciprocity of energy and space can be imagined as an accordion – the more folds per space (f), the higher the energy E ≈ f. In Table 1 (right column) we can see that the Compton frequencies of the electron, proton and neutron are much greater than the time fp of the basic photon. The same is true for their masses. The space of these particles, as measured by their Compton wavelengths, is correspondingly much smaller than the space of the basic photon with λA = 3×10⁸ m (see above). Such constants reflect the reciprocity of space-time – this reciprocity is inherent to all physical quantities of space-time.

Space-time is a dynamic, elastic entity (elastic continuum = “ether“) that can only expand or shrink in quantitative leaps when it is exchanged, but it never gets lost because it is closed. In reality, the expansion and contraction of space-time are the actual (visible) manifestations of energy exchange, which we perceive as motion. For instance, the contraction of photon space-time is assessed as gravitational attraction at the material level (see Volume II, chapter 4.8). This is the common view of humans, who are part of the material level. In mechanics, this exchange is assessed by velocity, which is the universal quantity of the primary term.

Expansion and contraction are the only manifestations of motion that are assessed in thermodynamics (e.g. ideal gas laws, the definition of temperature etc.; see Volume II, section 5.). At present, physics assesses energy statically as space or any other quantity relationship, e.g. as mass, time or work. This is the reason why physicists have failed to develop an idea of space-time as a dynamic, elastic entity. The concept of matter is such a static idea that has been developed in contrast to dynamic photon space-time.

The Mole Is a Dimensionless Constant

In the view of conventional physics, electromagnetic waves represent structureless, massless energy, while matter implies mass and structure. Mass and matter are often used in the same connotation – Einstein’s equation E = mc² is a typical example of this semantic tautology. In order to abolish this energy-matter dualism (or wave-particle dualism) conclusively, I shall show here that the mass (energy relationship) of all macroscopic objects can be obtained from the mass mp of the basic photon h within mathematics and only then confirmed in a secondary manner by empirical research. This new derivation will also bestow upon the Old Testament a new scientific touch (see Genesis 1:3: „Let there be light. And there was light.“).

We begin with the next basic SI unit, for the amount of substance: “mole (mol)“, where the term “substance“ is used as a synonym for “matter with mass“ (see the essay under point 24 in Volume II). A mole of any substance is defined as the amount of this substance that contains Avogadro’s number NA of atoms or molecules. We can regard the atoms or molecules of any substance as the action potentials EA of this substance level Emol, called the “mol-level“, as they are considered to have a constant energy, respectively mass. The energy of the system “1 mol“ can be expressed by the Universal Equation:

Emol = EA NA = EA f

Thus Avogadro’s number NA is the time f of the mol-level of any substance NA = f. In accordance with the new axiomatics, it is constant for all substances (systems) of the mol-level. The SI unit “1 mol “ is defined through NA. It is an abstract category that is built according to the principle of circular argument and, as with all other units, it requires the arbitrary selection of a real system of reference. Avogadro’s number is defined at present as the number of carbon atoms in 12 grams of 12C.

The particular system “1 mol“ is a typical example of how one builds abstract levels or systems of space-time in physics. In this case, “1 mol“ is considered “1 action potential“ of the macroscopic substance system, which is a U-set of NA atoms or molecules; the latter are action potentials of the corresponding microscopic level (U-subset) of matter. All these abstract levels are built within mathematics and contain energy space-time as an element.

It goes without saying that this kind of discrimination of space-time or matter is an abstract achievement of human consciousness. As all thoughts are U-subsets of consciousness, the latter being equivalent to space-time, any abstract definition of a system or level of space-time has a corresponding correlate in the real world. Our knowledge of the outer world is thus an a priori property of the mind, because the human mind is part of space-time and therefore obeys the Universal Law. Kant speaks of a priori synthetic conclusions. From the higher vantage point of the soul, space-time is actually a creation of human consciousness.

Therefore the epistemological arrow of scientific knowledge departs from the mind and is only then confirmed in the external physical world, and not vice versa, as is believed in present-day scientific empiricism. In fact, this cognitive process is closed, just as space-time.

At present, the empiric approach prevails in the natural sciences, while the role of consciousness as an a priori source of knowledge is completely neglected. This is the origin of the cognitive misery of science on the cusp of the greatest transformation of mankind to a 5D transgalactic civilisation – it is evident that this misery is self-inflicted and will prevent many recalcitrant scientists from ascension because they preach fake science. Just as it is unlikely that any of the presstitutes of fake news in the MSM will have any chance to ascend while perpetuating the dark habits (lies, deception and manipulation) of their descending 3D matrix as a strategy of survival in a rapidly changing world.

As we see, the definition of “mole“ takes place within mathematics and results in a number – NA. How can this abstract number be put in relation to matter (substance)? As usual, physics resorts to the vicious principle – a new unit of mass, the so-called atomic mass unit u, is introduced. It corresponds to 1/12 of the mass of one carbon atom 12C. The new axiomatics reveals that this circular definition employs NA as a conversion factor and introduces the new unit of atomic mass u in relation to the standard unit of “1 kg“:

u = 10⁻³ kg / NA = 1.6606×10⁻²⁷ kg, or

1 u / 1 kg = mx / mr = SP(A) = m = f = 10⁻³/NA

From this equation we obtain the Universal Equation for the quantity “molar mass“:

mx (kg) = 10⁻³ mr NA (mol) = EA f

This equation illustrates the “principle of similarity“ – the universal equation holds for space-time as well as for any quantity thereof. As mass is a space-time relationship, this principle follows directly from the presentation of this quantity.

From the above equation we can calculate the macroscopic molar mass of hydrogen MA from the mass of the basic photon h as a reference mass mr = mp. In this way we shall illustrate how one can obtain the mass of any macroscopic material object from the basic mass mp of the “invisible“ photon level, which physicists conventionally regard as empty, massless space (?!). For didactic purposes, we shall only consider the mass of the proton mpr and shall neglect the much smaller mass of the electron:

MH = mpr NA = (mp fc,pr) NA = 1.007×10⁻³ kg/mol (≈ 1 g/mol)

In this equation fc,pr = c/λc,pr is the Compton frequency of the proton and λc,pr = 1.321410×10⁻¹⁵ m is the Compton wavelength of this particle. The latter is a known natural constant (see Table 1). This same equation can be applied to any other element in Mendeleev’s periodic table or to any substance thereof.
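The two calculations above can be checked with a few lines, again with CODATA-style values of h, c, NA and the proton Compton wavelength as assumed inputs:

```python
# Check of the two results above, with CODATA-style assumed inputs.
h = 6.62607015e-34        # Planck constant, J*s
c = 299_792_458.0         # speed of light, m/s
N_A = 6.02214076e23       # Avogadro's number, 1/mol

# Atomic mass unit as a quotient of the kg reference and N_A:
u = 1e-3 / N_A
print(f"u   = {u:.4e} kg")           # ~1.6605e-27 kg

# Molar mass of hydrogen from the basic photon and the proton Compton
# frequency (the electron mass is neglected, as in the text):
lam_c_pr = 1.32140986e-15            # proton Compton wavelength, m
m_pr = (h / c**2) * (c / lam_c_pr)   # proton mass, kg
print(f"M_H = {m_pr * N_A:.4e} kg/mol")   # ~1.007e-3 kg/mol
```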

We conclude:

It is possible to calculate the mass of any material object from the mass of the basic photon mp, that is, from the “mass of light“.

We owe this “biblical“ achievement to the new Axiomatics which eliminates religion as a cosmological concept of genesis (see all my books on Gnosis and the articles on this website). Its secret lies in the novel insight that space-time is a closed entity – we can always compare the space-time of any pair of systems or levels of space-time.

Read also: An Open Letter to the Orion “Nobel Prize Committee” and How to Calculate the Mass of Neutrinos.

Physics could be, indeed, as comprehensible as religion is to the layman, provided one approaches reality in a logical and deductive way. Both fields of intellectual endeavour do not need an interpreter, e.g. a priest or a specialist. Both can be substituted by mathematics – and mathematics by the new Axiomatics, which is applied logic. Logical thinking itself is an a priori capacity of the mind and is thus accessible to everybody.

 

I.4. What is Temperature? (Part 4)

Thermodynamics studies temperature, heat and the exchange of energy. This branch has the same universal role in physics as wave theory. The basic quantity of space-time in thermodynamics is temperature T. (1) It is as familiar to us as conventional time t. While the idea of time is based on the aggregated sensation of energy exchange in the body and the surroundings, mainly perceived as motion in transition, our idea of temperature is linked to the sensation of warm and cold that is transmitted to the central nervous system by tactile senses. Contrary to other abstract physical quantities, temperature and time are physiologically associated with our sensations. Precisely for this reason, though, temperature (and conventional time) has not been understood.

Temperature is defined by a change in space. In thermodynamics, this change is measured three-dimensionally as volume [3d-space]. It is very important to observe that the change in space is the primary event, while its association with thermal sensations, such as “warm“ and “cold“, is of secondary anthropocentric character. Therefore, we should clearly distinguish between the subjective perception of temperature and its abstract, geometric definition as a physical quantity.

When the Universal Equation is applied to the definition of temperature as a change in volume, we can show that it is a concrete quantity of time:

T = f = [3d-space]x  / [3d-space]R =  fR / fx = SP(A)

As with all other quantities, the method of definition of temperature is at the same time its method of measurement. This fact is best illustrated by a survey of the historical development of temperature scales.

The method of definition and measurement of T reveals a fundamental property of space-time that has not been realized so far – temperature can only be measured in thermal contact. This fact reveals the continuity of space-time. As T is time f, and f is a quantity of energy exchange E ≈ f ≈ T, this means that thermal exchange takes place between contiguous levels – space-time is continuous (primary axiom). This fundamental property of space-time also includes photon space-time. This aspect is not fully comprehended in thermodynamics.

The measurement of T takes place in thermal equilibrium, also known as the zeroth law of thermodynamics. This law says that if two objects are in a thermal equilibrium with a third (through contact), they are in thermal equilibrium with each other. This is an intuitive notion of the primary term as a continuum.

The zeroth law anticipates the existence of a common thermodynamic level of space-time, which is part of all material objects (U-subset of matter). The absolute time of this level is constant, T = const., because its space-time is also constant. I shall elaborate this aspect in detail below.

As we see, all basic ideas of physics are intuitive perceptions of the nature of the primary term. This also holds for thermodynamics. Thermal contact and equilibrium are the real prerequisites for the definition and measurement of temperature. According to the principle of circular argument, one needs a reference system (building of equivalence) to make a comparison (building of relationships).

The choice of the reference system to which the temperature of the objects is compared has evolved with time. The mercury column of the normal thermometer is such a reference system. From a theoretical point of view, the choice of the substance is of no importance – mercury can be substituted by any other substance. This liquid metal has been selected for practical reasons.

The choice of the geometric shape of the mercury column is, however, not accidental. It is a cylinder with the same cross section along the whole length of the scale, so that equivalent changes of the mercury volume lead to equivalent changes of the column length:

Δ[3d-space] ≈ Δ[1d-space].

Thus, the building of equivalent increments of mercury volume, which can be regarded as constant action potentials EA, is the a priori condition for the measurement of temperature T = f and heat Q = E = EA f. Once the building of real space equivalences is ensured by applied geometry, mathematics is subsequently introduced as the method of measurement.

The historical procedure has been the following: the normal freezing point of water (ice point) has been assigned the number “0“, the normal boiling point of water (steam point) the number “100“. The unit of volume change is arbitrarily called “degree“ and is written as 0 °C or 100 °C. “C” stands for Celsius, who was the first to introduce this scale – hence the Celsius temperature scale.

The length of the mercury column at 0 °C is Lo and at 100 °C it is L100. The length difference ΔL = L100 – Lo is subdivided evenly into 100 segments, so that each length segment corresponds to “1 degree“ (2). The number “100“ for ΔL is arbitrarily selected. Within mathematics, we can assign this magnitude any other number, for instance, “1“ as the certain event or 1 unit, without affecting the actual measurement of temperature.

From this we conclude that the number 100 of the Celsius scale is a simple conversion factor K = SP(A) of space measurement. This becomes evident when we compare the Celsius scale with the Fahrenheit temperature scale (see exercise 1. below).

Celsius temperature tc is defined as:

tc = (Lt – Lo) / (L100 – Lo) × 100 = ΔL/LR =

[1d-space]x / [1d-space]R = fR / fx = f = SP(A)

or

[1d-space]x fx = [1d-space]R fR = vx = vR =

[1d-space-time]thermal = const.

The above equation proves that:

“Thermal equilibrium“ is a tautology of the constant space-time of the thermodynamic level of matter.

However, the actual space and time (temperature) magnitudes are specific for each substance or object that can be regarded as a distinct thermal system – hence the necessity of measuring its particular temperature (time) and volume (space). The same holds true for their relativistic changes.

All we can do in physics is to measure space, time and space-time of the systems and levels.

Anything else is the delusion of the conventionally thinking physicist’s mind. That is why current physics is fake science as the MSM are fake news.

Thermodynamics confirms that space-time is an incessant energy exchange. This discipline has developed the most adequate perception of the primary term. Therefore, it is not surprising that the first law of thermodynamics, assessing the conservation of energy, is a static perception of the Universal Law. It is no coincidence that its discoverer, Julius Robert Mayer, was a physician, as is the author of this article. Both of them studied medicine in Germany and first discovered the Universal Law as a law of conservation for organic matter, and only after that confirmed it in physics (in 1842 and 1995, respectively) (3). Space-time is a cyclic phenomenon in evolution. This is also true for the history of any scientific discovery concerning space-time (4).

Although mercury thermometers are commonly used, they are not very precise outside their calibration points. The constant-volume gas thermometer enjoys this virtue to a greater extent. Instead of a volume change, it measures a change of pressure at constant volume. This isochoric measurement of temperature is based on the ideal-gas law. I have shown in Volume II that it is an application of the Universal Law.

The further refinement of temperature scales reflects the inherent striving of man for precision in assessing space-time. Because of the difficulties in duplicating the ice-point and steam-point states with high precision in different laboratories, a temperature scale based on a single fixed point was adopted in 1954 by the International Committee on Weights and Measures – the triple point of water. This equilibrium state occurs at a pressure of 4.58 mmHg and a temperature of 0.01 °C. The ideal-gas temperature scale is defined so that the temperature of the triple point is T = 273.16 kelvins (K), where the kelvin is a unit of the same size as the Celsius degree. The number 273.16 is thus a conversion factor (T = tc + 273.15, since the triple point of 273.16 K corresponds to 0.01 °C).
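The interpolation of the Celsius scale and the kelvin offset can be summarized in a short sketch; the column lengths are hypothetical and serve only to illustrate the method of measurement described above:

```python
# Hypothetical mercury-column readings (not from the text), illustrating
# the interpolation t_c = (L_t - L_0)/(L_100 - L_0) * 100 and the
# conversion to the kelvin scale.
def celsius_from_column(L_t: float, L_0: float, L_100: float) -> float:
    """Linear interpolation between the ice point and the steam point."""
    return (L_t - L_0) / (L_100 - L_0) * 100.0

L_0, L_100 = 5.0, 25.0   # column lengths in cm at 0 degC and 100 degC
L_t = 12.4               # measured column length in cm

t_c = celsius_from_column(L_t, L_0, L_100)
T = t_c + 273.15         # kelvin temperature
print(f"t_c = {t_c:.1f} degC, T = {T:.2f} K")   # 37.0 degC, 310.15 K
```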

As the calibration based on the triple point of water alone was found to be insufficiently precise, in 1990 a new calibration of the Kelvin scale was introduced, based on 17 fixed points (minimisation of systematic error).

This is not the end of the story. With the discovery of the Universal Law, it will be possible to define a new, more precise temperature scale that will be based on photon space-time as a reference system as is the case with the two dimensions (constituents) of space-time – space and time. The scientific foundation of such a scale is based on the knowledge that temperature is a quantity of time (see Stankov’s law in Volume II, chapter 5.7). Below I have added two simple exercises for my readers to test their newly acquired knowledge on the new physics of the Universal Law.

Exercises:

1. Express the conversion factor of the Fahrenheit temperature scale to the Celsius scale in the new space-time symbolism.

2. Determine the space-time dimensionality of the coefficient of linear expansion α and the coefficient of volume expansion ß. Discuss these quantities in the light of the new axiomatics. Suggest at least three applications of the Universal Law in the production and construction of materials subjected to significant thermal expansion or contraction.

Notes:

1. We use for temperature in physics the symbol “T“ in kelvin, which is the official SI unit. When temperature is explicitly given in the Celsius scale, I shall use tc.

2. It is important to observe that the same procedure is also used to define “per cent“. The term “per cent“ denotes a universal numerical relationship of any real or abstract quantity.

3. While Mayer was at first rebuked for his metaphysical style of scientific presentation and suffered from neglect, we can hope that the new axiomatics of the Universal Law will enjoy a more cheerful destiny. At least, one cannot argue that I do not understand Newton’s laws, as was the case with Mayer. In fact, it was Newton who did not understand gravitation. This is true for any physicist before and after him.

4. One may speculate, whether it is a coincidence that the discoverer of the Universal Law comes from Thracia, which is the cultural homeland of Heraclitus, the first discoverer of the Universal Law, the atomists, the first really modern scientists of the Old continent, and Aristotle, the universal genius of antiquity, who developed a universal categorical system of science based on the intuitive (or maybe rational) perception of the Universal Law. The answer will be given in the very near future.

 

I.5. The Greatest Blunder of Science: „Electric Charge“ is a Synonym for „Geometric Area“

Its fundamental SI Unit „Coulomb“ is a Synonym for „Square Meter“ (Part 5)

The recognition that the physical world = the universe = All-That-Is, which we observe with our limited senses as sentient human beings, has only two dimensions/constituents – space and time – and can therefore be assessed only as space-time (as is already done in the theory of relativity, but not yet fully comprehended by all physicists) is the greatest revolution in the human world view, once it is fully anchored in the minds of the people. That is why I departed in this series of articles from the SI system, proving so far that five of its six basic SI units can be reduced to the two dimensions – space and time (frequency).

As it is generally acknowledged that all the other SI dimensions and units are composites of these six fundamental dimensions and units, this is the unequivocal proof that space-time = energy = All-That-Is has only two dimensions – space and time. In this context it is vital to reiterate one more time that any physical experiment contains the SI system as a method of definition and measurement of the observed physical quantities and parameters so that reliable and reproducible results can be achieved.

At the same time I have proved beyond any doubt that the method of definition and measurement of all physical quantities is mathematics and/or geometry. As both disciplines are hermeneutic categorical systems of human consciousness and have no external object of study, all physical quantities present-day physics deals with are abstract categories of the human mind and not intrinsic properties of physical matter as it is erroneously believed by all physicists today. When this knowledge is fully internalized, one has an open access to the new Physical and Mathematical Theory of the Universal Law.

So far I have proved in my previous articles that five of the six fundamental SI dimensions and their corresponding units can be derived (and thus eliminated), from the two basic constituents of space-time = energy = All-That-Is – space and time (frequency) as this is listed below one more time for the sake of clarity:

  • (1) length (metre) (Part 1),
  • (2) conventional time (second) (Part 1),
  • (3) mass (kilogram) (Part 2 and Part 3),
  • (4) temperature (kelvin) (Part 4),
  • (5) amount of substance, also called “the mole“ (mol) (Part 3),
  • (6) current (ampere) and charge (coulomb) 

The last two dimensions and SI units, current (ampere) and charge (coulomb), are defined in a circular manner so that they can be reduced to one dimension and unit as I shall explain below. Since I have discussed both quantities in a comprehensive article published on this website, I will refrain from giving the full proof here as it contains some complicated mathematical equations and necessitates a very deep knowledge of electromagnetism and quantum mechanics. I recommend my readers to read my article in full here:

The Greatest Blunder of Science: „Electric Charge“ is a Synonym for „Geometric Area“

and also Volume II on this same topic. Below I will quote the basic conclusions of this article:

“Abstract

“The current definition of the basic quantity „electric charge“ and its fundamental SI unit „coulomb“ in physics is undoubtedly the greatest blunder of modern science. When the principles of mathematical formalism are applied to this definition, it can be proven in an irrevocable manner that „electric charge“ is not an intrinsic property of matter, as is erroneously believed in physics today, but a synonym for „geometric area“, while its SI unit „coulomb“ is a synonym for „square meter“. The reason for this systemic blunder is the incomplete, and hence, formalistically wrong translation of the current definition of electric charge into a mathematical equation by physicists, from which they have subsequently derived all known laws of electricity, magnetism and electromagnetism. Thus, this formalistic blunder has been replicated infinite times throughout the history of this science and has biased the whole edifice of physics and natural sciences from mathematical, epistemological and cognitive point of view. This revolutionary physical and mathematical proof affects the very foundation of modern science. At the same time it opens the possibility for a full axiomatisation of physics and its development to a consistent, unified theory of the physical world (see Volume II).

Introduction

The current definition of the basic quantity „electric charge“ and its fundamental SI unit „coulomb“ in physics is, undoubtedly, the greatest blunder of science since the rejection of the geocentric Ptolemaic system of the universe in late Renaissance, when the foundation of modern science was laid by such prominent scholars as Copernicus, Galilei, Kepler and Descartes.

Although since then billions of physicists, scientists, teachers and students have studied, educated and used the definition of „electric charge“ in the firm belief that it is an intrinsic property of matter, and are still doing so today in schools, universities and experimental research all over the world, they have obviously failed to realize that this definition of charge is, in fact, a synonym (tautology, pleonasm) of the simple geometric term „area“, which is known since antiquity, e.g. in Euclidean geometry. Accordingly, the SI unit „coulomb“ is a synonym for the area unit „square meter“:

 charge = geometric area

1 coulomb =  1 m2      

The reason, why this greatest scientific blunder could have occurred in such an „exact“ natural discipline as physics, lies solely in the fact that physicists have translated the verbal, non-mathematical definition of „electric charge“ in an incomplete, and hence, wrong way into a mathematical equation, from which they have subsequently derived all known laws of electricity. Thus they have biased the theory of electromagnetism, and also quantum mechanics where all elementary particles of matter are supposed to have a charge, from an epistemological and cognitive point of view. This elementary and incomprehensible mathematical inconsistency has been grossly overlooked by educated mankind and exposes present-day physics as fake science.

In the following, an impeccable and irrevocable mathematical proof will be presented that is based on the methodological principle of mathematical formalism, namely the principle of inner consistency and lack of contradiction, also known as Hilbert’s formalism: It will be shown that „electric charge“ is not an intrinsic property of matter, as is believed in physics today, but a synonym for „geometric area“, and that the SI unit „coulomb“ is a synonym for „square meter“.

All mathematical proofs presented in this publication are accomplished according to established physical theory and experimental evidence, and adhere diligently to currently accepted definitions in electricity and magnetism that can be found in any comprehensive textbook on physics. The new, revolutionary aspect of the present elaboration is the consistent implementation of mathematical formalism in physics and the novel interpretation of the epistemological and cognitive background of basic physical terms.”

The two basic quantities of electricity and their SI units – charge Q with the SI unit “coulomb“ (C), and current I with the SI unit “ampere“ are defined in physics as follows:

(I) „The SI unit of charge is the coulomb, which is defined in terms of the unit of electric current, the ampere (the ampere is defined in terms of a magnetic-force measurement… F = E/s; when s = 1, F = E, which is actually an energy measurement, see the Universal Equation). The coulomb (C) is the amount of charge flowing through a cross-sectional area (A) of a wire in one second (time) when the current in the wire is one ampere (action potential)“. (1)

(II) „If ΔQ is the charge that flows through the cross-sectional area A in time Δt, the current is I = ΔQ/Δt. The SI unit of current is the ampere (A): 1A = 1C/s“. (2)

This circular, tautological definition of the two fundamental quantities of electricity, charge and current, within the SI system is based on the geometric method of measurement of their units. Practically, it is based on the definition and measurement of the (electro)-magnetic force which is an abstract mathematical quantity of the primary term “energy” (F = E/s, when s = 1, F = E). This force is also called electromotive force (emf).

The classical definition of electric charge and current, as quoted above, implements mathematics in an inconsistent way and introduces a systemic flaw in electricity that extends throughout the whole edifice of physics. This has not been realized so far. When the non-mathematical, verbal definition of electric current (II) is presented in mathematical symbols in physics, the quantity “cross-sectional area A“ is omitted without any reason:

I = ΔQ/Δt.

This omission in the mathematical presentation of the current is a fundamental formalistic blunder with grievous cognitive consequences for this discipline. This becomes evident when we express the present formula of the current in non-mathematical terms:

“Electric current I is the charge ΔQ that flows during the time Δt or alternatively: “current is charge per time.“

This definition is meaningless, as physics “does not know what charge is“ (3).

In reality, the current is measured in relation to the cross-sectional area A of the conductor according to the principle of circular argument. The latter is the only operational method with which all six known physical quantities in the SI system are initially defined within mathematics and then measured in a secondary manner in the real physical world (see above). As I have shown for the other five basic dimensions (quantities and SI units), this procedure is the foundation of the SI system – it is the universal method of definition and measurement of all physical quantities and their corresponding SI units.

The principle of circular argument operates as follows: For each specific physical quantity, defined in an a priori mathematical manner in the mind, a real physical system is chosen as a reference system and its specific quantity, e.g. energy, force, space, time, etc., is assigned the number „one“ = 1. This is a basic mathematical procedure, a primary axiom in the new Axiomatics that allows the application of mathematics to real objects.

In the above definition of charge, the reference system is the cross-sectional area A of the wire, which can be experimentally measured. The charge is then defined as a relationship to A and is thus by definition also an area:

I = ΔQ/(AΔt).

One can only compare identical quantities. When A = 1, the cross-sectional area may visually disappear as a quantity from the mathematical equation of the current, but it is still part of its theoretical definition. This fact has been grossly overlooked by all physicists so far – and I am speaking here of millions (?) of physicists and scientists since Galilei founded physics four centuries ago.
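The role of the hidden reference area can be made explicit with a minimal symbolic sketch (my own illustration, not part of the original article): the area A only drops out of the formula of the current when it is assigned the number 1, exactly as described above.

```python
# A minimal symbolic sketch (my own illustration) of the argument above: the
# cross-sectional area A disappears from the formula of the current only when
# the reference area is assigned the number 1, but it remains part of the definition.
import sympy as sp

dQ, dt, A = sp.symbols("Delta_Q Delta_t A", positive=True)

I_full = dQ / (A * dt)           # definition with the reference area A kept explicit
I_textbook = I_full.subs(A, 1)   # the textbook formula I = ΔQ/Δt appears when A = 1

print(I_full)        # Delta_Q/(A*Delta_t)
print(I_textbook)    # Delta_Q/Delta_t
```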

As the electric current I and its SI unit ampere are part of this circular definition, and as its method of definition and measurement is the electromotive force F, which is an abstract quantity of the primary term, energy E = SP(A)[2d-space-time], it is very simple to show that electric current is defined as electromagnetic action potential:

Current = I = EA = SP(A)[1d-space-time][1d-space]

From this elaboration we can derive the following fundamental, universal, methodological principle concerning the method of definition and measurement of all physical quantities in physics:

Physical relationships can only be built between identical quantities.

There is no exception to this rule. Relationships between heterogeneous quantities are meaningless, unless they are associated with conversion factors that establish the equality of dimensions in a physical equation. Such conversion factors are often defined in physics as natural constants. This is the mathematical basis of modern physics that should be the topic of any true methodology of this natural science.
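As a minimal illustration of such a conversion factor (my own example, using the Planck constant in its role as a natural constant, which is also discussed further below), the relation E = hf only compares identical dimensions because h carries the units joule·second:

```python
# A minimal sketch (my own example) of a conversion factor that establishes the
# equality of dimensions in a physical equation: the Planck constant h in E = h*f.
# Without h, the equation would compare an energy (J) with a frequency (1/s).
h = 6.62607015e-34      # Planck constant in J*s (exact SI value)
f = 5.0e14              # an optical frequency in Hz (assumed example value)

E = h * f               # energy in joules
print(E)                # ~3.3e-19 J
```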

The aforementioned basic formalistic considerations regarding the application of mathematics in physics were made for the first time with this theoretical clarity by myself, after I discovered the Universal Law and developed the new physics in the 90s, although they have been intuitively followed in conventional physics – unfortunately not in a consistent way, as has been shown for the definition of charge above.

It is a basic axiomatic knowledge that:

it is sufficient to introduce only one wrong statement in a mathematical system to bias the whole system.

This knowledge, as proven by Gödel in 1931, has undermined Hilbert’s formalism, with which the consistency of mathematics was to be proven by finite procedures (4). This has triggered the foundation crisis of mathematics (Grundlagenkrise der Mathematik), as embodied by the continuum hypothesis and the famous Russell’s antinomy. This crisis is still ongoing, notwithstanding the fact that nowadays all mathematicians and theoreticians prefer not to take any notice of it.

Since physics is mathematics applied to the physical world, the ongoing foundation crisis of mathematics also affects the theoretical foundation of this natural science. Gödel proved essentially that mathematics, being a hermeneutic discipline without an external object of study, cannot furnish the missing proof of existence (Existenzbeweis) by finite procedures and thus achieve its full axiomatisation with its own means. Each time such formalistic procedures are applied to the structure of mathematics, they lead to fundamental antinomies and challenge its very foundation. Gödel’s theorem tells us in plain words that, in order to solve its ongoing foundation crisis, mathematics should seek its proof of existence in the real physical world.

The goal should be the establishment of an integrated physical and mathematical axiomatics based on finite procedures, with the help of which the proof of existence should be empirically rendered. Such an axiomatics should depart from a small number of primary axioms – ideally from a single primary axiom – that are valid in both physics and mathematics, so that there will no longer be any artificial theoretical separation between the two disciplines.

The new Axiomatics of the Universal Law departs from one single term, the primary term and axiom, which is both the origin of physics and mathematics:

Primary Term = Energy = Space-Time = Continuum =

Continuum of numbers = Infinity = All-That-Is

The theoretical results of the present publication in the field of electricity and electromagnetism show that this task can be easily achieved within the existing structure of physics by consistently implementing the principles of mathematical formalism and thereby eradicating all mathematical, formalistic blunders that have been historically introduced into this natural science. Such mathematically inconsistent statements and definitions contaminate the structure of present-day physics, where all mathematical equations are essentially correct and all their verbal interpretations are entirely wrong.

This has hindered the unification of physics and its natural evolution to a transcendental biophysics, as I have accomplished it in the new General Theory of Science of the Universal Law (read also here). In fact, present-day, conventional physics is a “fake science” in terms of true cognition of All-That-Is, just as the “fake MSM news” are a total distortion of the political and economic reality in which humanity dwells on the cusp of its ascension.

Present-day physics is incapable of grasping 3D space-time as a holographic image of the limited human senses and perception and its current transformation to a multidimensional simultaneity where the identical physical quantities (dimensions), conventional time and space (as distance), are eliminated as a human illusion once and for all.

Only energy and frequencies really exist in All-That-Is

At present, physics, being a scientific categorical system for the physical world, cannot adequately reflect the unity of Nature – for instance, gravitation cannot be integrated with the other three fundamental forces in the standard model, and there is no theory of gravitation at all. The elimination of these mathematical inconsistencies from the theory of physics by myself has allowed the development of this natural science to a truly axiomatic system of Nature based on the primary term of human or any other consciousness in All-That-Is.

This accomplishment is the long-sought unification of physics, which many renowned physicists have endeavoured to achieve on the basis of mathematical formalism since the beginning of the 20th century. It was, however, first accomplished by the author in 1997, when he published his first volume on the new physical and mathematical theory of the Universal Law, and then further developed in volume II, which can be read independently of volume I and contains many more advanced derivations that cannot be found in the first book.

Essentially, volume II is a comprehensive textbook on physics, the theory of mathematics and cosmology. It contains the same theoretical content as can be found, for instance, in the very popular university physics textbook by P. A. Tipler, whose design I used as a template for my books on physics so as to facilitate the reader’s didactic approach to the new revolutionary theory of the Universal Law.

Notes:

1. Tipler, PA, Physics, Worth Publ., New York, 1991, p. 600.

2. Tipler, PA, p. 717.

3. Tipler, PA, German ed., p. 618.

4. Gödel, K. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, Monatshefte für Mathematik und Physik, 1931, pp. 173–198.

 

I.6. Galilei’s Famous Experiment of Gravitation Assesses the Universal Law with the Pythagorean Theorem

A Fictitious but Scientifically Very Truthful Report Beyond Time and Space

Foreword

My idea in writing this playful essay was to show that knowledge is eternal and exists beyond time and space. It is an excerpt from Volume II on Physics and Mathematics (pages 381–386). The experiment of Galileo Galilei to which I refer in this essay really happened and marked the beginning of modern experimental physics. I saw the presentation of this experiment in 1997 in a special exhibition in the world-famous “Deutsches Museum” in Munich, which is dedicated to science, engineering and technology throughout the ages.

I am talking about Galileo’s famous inclined-plane experiment on gravitation, of which there are numerous variations. The one I saw used a geometric presentation of a series of rectangular (right) triangles with the same perpendicular hypotenuse and varying sides (catheti), placed in a circle so that the hypotenuse was the diameter of the circle.

I searched on the Internet for a visual presentation of this specific experiment I saw in the museum but could not find one. There are many other versions of this experiment which are rather confusing. Therefore I made a drawing of this experiment as I remember it and have added it to the text below.   

When I wrote this essay I was fully channelled by the Source, and I could hear the giggle of the angels, who were thrilled by the simplicity and incredible clarity of my humorous scientific argumentation that spanned a bridge from the major scientific ideas of Antiquity to Modern Times, when science first emerged as applied physics in this famous experiment on gravitation by Galileo Galilei, who has since been considered the father of modern physics.

Essay

“ All truths are easy to understand once they are discovered; the point is to discover them.“   Galileo Galilei

Before Galilei starts with his experiment, he argues as follows: “The theorem of Pythagoras which I have used for the construction of this experiment says that: c² = a² + b². According to this equation, it does not make any difference if the ball is falling to the earth in a free fall along the perpendicular hypotenuse c or along the inclined path consisting of the sides (a+b). If I define the work which my assistant does to carry the ball to the top of the triangle as “energy“ with respect to my favourite philosopher, Heraclitus, this would say that the energy of the falling ball will be the same, no matter which way it falls down to the same point on the earth. From the geometry of the triangle, I can assert that the energy (work) remains unchanged, independently of how the ball moves from one point to another.

[Figure: the author’s drawing of the experiment – a series of right triangles with a common perpendicular hypotenuse inscribed in a circle, the hypotenuse being the diameter of the circle]

To prove this hypothesis, I must measure the falling times in a, b and c and compare them. To ensure that I do not commit any mistake, I shall change the length of the inclined tubes, which form the sides of the right triangle, each time, and measure the falling times of the ball for various side lengths a and b of any right triangle in the circle.

After the experiment, Galilei analyses the results ad alta voce: “My experiment on gravitation shows that the falling time, tempo t, of the ball, which I have chosen as a representative object of matter, materia m, is independent of the slope of the inclined tubes: the falling time for the perpendicular hypotenuse c is equal to the falling times for any length of the inclined tubes a and b as the sides of the right triangle. Therefore I can write this practical result as follows:

t_c = t_a = t_b = t = constant
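(Nota bene: the equality of the descent times asserted here is known as Galileo’s law of chords. The following minimal numerical sketch is my own modern aside, not part of the essay; it assumes frictionless sliding from rest and g = 9.81 m/s².)

```python
# Minimal numerical check of the law of chords (a modern aside, not part of the essay):
# a ball sliding from rest, without friction, down any chord of a vertical circle that
# starts at its top point (or ends at its bottom point) needs the same time as the free
# fall along the vertical diameter c.
import numpy as np

g = 9.81      # gravitational acceleration in m/s^2 (assumed value)
D = 2.0       # length of the vertical diameter c in metres (assumed value)

t_free_fall = np.sqrt(2 * D / g)                 # time along the hypotenuse c

for theta_deg in (15, 30, 45, 60, 75):           # inclination of the chord to the horizontal
    theta = np.radians(theta_deg)
    L = D * np.sin(theta)                        # chord length (right angle in the semicircle)
    a = g * np.sin(theta)                        # acceleration along the chord
    t_chord = np.sqrt(2 * L / a)                 # time from rest: L = a*t^2/2
    print(f"{theta_deg:2d} deg: t = {t_chord:.4f} s   (free fall along c: {t_free_fall:.4f} s)")
```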

In this case, I can use the famous Pythagorean theorem, which I have already employed for the construction of my experiment, to present the results in a simple mathematical equation. This method has recently become quite popular, after that French youngster Descartes and his followers, the Cartesians, are keen in explaining the world from the mind by employing the geometric method – they call it boisterously the “Cartesian method“. Why not! This may be a good idea.

As far as I remember, it was Descartes who wrote about the conservation of movement in the universe? This is exactly what I have observed in my experiment on gravitation. Indeed, it would be “una buona idea“ to test if the theorem of the old grand master also holds for earth’s gravitation. If I am lucky to prove it, I will at the same time present evidence that the Aristotelian system of forms, which is based on the Pythagorean school, also holds in gravitation. This will be an excellent confirmation of the validity of ancient Greek science in the spirit of Italian Rinascimento (Renaissance).

On the one hand, the system of Aristotle has not been challenged since antiquity; it is generally accepted among scholars and does not need any additional confirmation. On the other hand, I have read that most Greeks were contemptuous to experiments and did not bother  much about scientific experience – for them Geometry was the ultimate Truth. If I could now prove that Geometry holds for earth’s gravitation – this divine force of matter – I will be the first scholar to show convincingly that Nature operates according to Geometry.

Pythagoras teaches us that “everything is number“. Could it be that his theorem is also valid for the new system of Copernicus, as my intuition whispers me when I reflect on my recent astronomic observations of the planets’ movement? In this case I have to refute the Ptolemaic system, to which this god-damned church sticks without any grounds. Take care, old chap! The spies of the inquisition have flooded even the free town of Florence. You better solve this problem for yourself and keep it secret during your lifetime. Let future scientists re-discover the mechanism of gravitation and the motion of planets when life will be less dangerous than in our turbulent times.

Let us now order the results of the experiment in a logical manner. If the time t of the falling ball m is constant in any of the tubes a, b and c, I can introduce the falling time t and the ball m as mathematical symbols in the Pythagorean theorem. For this purpose, I have to multiply the hypotenuse c and the sides of the right triangle a and b with the term m/t² :

c² = a² + b²   | × m/t²

This artificial mathematical operation will not alter the initial validity of the famous theorem. On the contrary, it will bring a real physical meaning to this abstract theorem of Geometry – from now on, it will also hold in gravitation:

m(c²/t²) = m(a²/t²)  + m(b²/t²)  (259)

This is a pretty good result, but my intuition tells me that I have to present this mathematical equation in a more adequate form. Let us try it now! The hypotenuse and the sides of the right triangle are straight lines. According to Euclid, they have only one dimension, which I can present as “1d “. I can express these straight paths with the symbol [1d-spazio] for one-dimensional space. The time t measures how “quick“ the movement of the falling ball is. As the ball needs the same time to fall in c as in each of the sides, a and b, of the right triangle, the movement of the ball is the “quickest“ during the free fall in the hypotenuse because c is longer than any of the sides, a or b.

If I now build a quotient of space (spazio) and time (tempo) I will have an adequate measure to compare how “quick“ the movement of the ball is. This is, indeed, a brilliant idea! As far as I know, nobody has come to this idea before. I will call this new mathematical quantity “velocita“ (velocity) and express it mathematically with the first letter of the word “v“. I can now write the following equation:

v = velocita = [1d-spazio] / [tempo] =  [1d-space] / t

(Nota bene: Before Galilei the concept of velocity (speed) did not exist and humans were unable to measure how quick a movement was but only used verbal descriptions such as “quick” and “slow”. This physical quantity v = s/t was first introduced by Galilei in this experiment and since then it is the backbone of classical mechanics and physics as a whole. I have proved that velocity is a universal geometric presentation of one-dimensional space-time as energy which all physicists use in an unconscious manner without understanding the epistemology of this quantity as they have not grasped the essence of energy as consisting of only two dimensions/constituents – space and time – as proven beyond any doubt in the new Theory of the Universal Law.).

Not bad, but I am not satisfied with this presentation. Building quotients like this one takes a lot of space and paper is expensive nowadays. I can solve this practical problem by defining the reciprocal time 1/t as tempo fisico (physical time) and use the first letter of the word “fisico“ as a mathematical symbol for this quotient:

f =  1 /[tempo]  = 1/t.

Thus, physical time f can be easily distinguished from (t)empo ordinario t (conventional time). Now, I can write for the velocity: v = [1d-spazio] f , or simply:

v = [1d-spazio-tempo] = [1d-space-time].

I think this is a simple expression, which any educated man with a modest knowledge of mathematics will immediately understand. I shall now express the Pythagorean theorem with the new symbols, so that everybody can learn this equation of gravitation by heart without realizing that I have borrowed it from Pythagoras. This is a good method to hide my initial source of inspiration:

m(c²/t²) = m(a²/t²) + m(b²/t²) = mv_c² = mv_a² + mv_b² =

m[2d-spazio-tempo]_c = m[2d-spazio-tempo]_a + m[2d-spazio-tempo]_b = const.   (260)

Galilei contemplates for a long time before he speaks again: “If I am honest, it is unfair to hide the name of the greatest scholar of antiquity, to whom I owe my entire scientific knowledge. I must find an elegant solution of paying reverence to Pythagoras without going into troubles with the inquisition, which looks with a bad eye upon his Geometry.“ He thinks intensively: „Now, I got it! I will substitute the symbol for the ball m with a new symbol of abbreviation: “SP(A)“ for “il Supremo Pythagoras di (A)ntiquita“. I like this very much! (In the new Theory of the Universal Law I use this symbol for the “statistical probability of the event A – SP(A)” in order to show that statistics is another adequate mathematical method of assessing the physical events of space-time = energy in addition to Geometry. Note, George)

Similarly, I will express the constant (e)nergy of the ball in a free fall mc² /t² with the first letter “E“ of the name of its first discoverer – “il grande filosofo di Efeso – Eracliteo.“ In this way, I will pay tribute to the two greatest philosophers of ancient Greece in my General equation of gravitation:

E = SP(A)[2d-spazio-tempo] = SP(A)[2d-space-time] = const. (261)

Strange! I have an awkward feeling that I have met this equation before. I am sure that it can’t stem from another contemporaneous physicist. As there are only few physicists like me in Italy and North Europe, I am well acquainted with their works. Could it be that I have met this equation in the works of that wizard – an excellent mathematician and astrologist with an incredible virtue of prophecy – who had died in Salon-de-Provence only two years after I was born. What was his name?

Ah, yes, I got it, they called him Nostradamus! I must have hidden his apocryphal books somewhere in my private library. I remember that I bought them from a beggar who knocked on my door some years ago. He was selling beautiful books written partly in Latin and partly in French. I had never seen such books before. I must find them and check their content again.“

He is searching in his library: “Ah, here they are! Let me see (he reads). What an ambiguous and secret language! Poor guy! His life must have been as insecure as mine. Yes, I have found what I am looking for. Nostradamus foretells the arrival of an unknown scholar of Byzantine origin who will come to the West and will (re)discover the Universal Law of nature at the end of the second Millennium“

(Nota bene: Bulgaria was the first Slavonic and Christian state on the Old Continent since the 7th century and was a cultural mirror image of Byzantine with which it fought numerous wars, in many of which the Byzantine army was crucially defeated. My birthplace Plovdiv was the capital of the rich Roman province Thracia for many centuries and then an important city in the Byzantine empire after the Reptilian Emperor and founder of the state church of Christianity as Caesaropapism (for more information read my recent comments here) Constantine “the Little” moved the capital of Rome to Constantinople on the Bosphorus. Plovdiv is the oldest city in the world with an uninterrupted history that goes back to the 5th Millennium B.C. based on excavations and material facts.).

Galileo reads from Nostradamus’ book:

“After much “trial and error“ in science, lasting for more than four centuries from now on, this man will unify science and will trigger a new renaissance of Greek Logic, similar to that we observe in arts and literature in Western Europe after the fall of Constantinople.“

Galilei murmurs: “What a coincidence! This man uses the same equation for Heraclitus’ primordial energy (flux) as myself. Excellent! It was a very good idea to think of Nostradamus. One never knows where one’s inspiration will come from.“ Galilei is excited. He turns the pages of Nostradamus’ book forth and back: “Ah, what do I see? This Byzantine scholar must have had some predecessors during Novecento (20th century). Their names are Lorentz, Einstein and some more, especially Einstein is often mentioned by Nostradamus. But this is incredible! How is it possible that so many physicists are working on the same problem? This will never happen in Italy today. All these scholars are using geometric formulae to solve physical problems. Here, Nostradamus gives us an example.“

Galilei reads further with an expression of incredulity on his face: “Mamma mia! They also use the Pythagorean theorem, but what a complicated mathematical expression have they chosen! Vergogna! Now wait! How do they call this equation? – the right triangle theorem of the total relativistic energy in relation to momentum and rest energy:

E² = (pc)² + (m_0c²)²    (262)

Dio mio, this is my geometric theorem of gravitation – only written with other symbols! I must scrutinize it.“ He reads further: “Now, I see. These scholars depart from the equation of the relativistic energy (231) and the equation of the relativistic momentum p, which is obviously a mathematical iteration of the above equation.

What does the future Byzantine scholar say about this result? Yes, he is in accordance with me. He proves that the equation of the relativistic energy is an application of the universal equation of Heraclitus’s primordial fire as obtained by myself for gravitation. The same is also true for the relativistic momentum, which is a mathematical quantity of the primordial energy and has no real existence. That’s good! It seems that I am on the right track.

This scholar shows that the above equations are mathematical abstractions that merely assess the “continuum of numbers or probabilities“. This expression is new to me. I only know of the continuum of geometry – Plato and Aristotle tell us about the ideal forms of the geometric continuum that assimilate real forms, but why not use the continuum of numbers for the same purpose. Most probably, both terms are identical. Anyway, it is a well-known fact that we can express any geometric solution in numbers and vice versa.

Take for example the irrational number √2 , which follows from the Pythagorean theorem. Plato says that this number symbolizes the incommensurability of the geometric continuum. Therefore the continuum of numbers expresses the continuum of Geometry with different symbols – we can replace any geometric symbol with a mathematical one and vice versa. This is exactly what I have done in my equation on gravitation.“

Galilei turns the pages hastily and reads at random. He is bewildered: “This is, indeed, a pure nonsense! Lorentz and Einstein, or whatever their names will be, assert that the aforementioned relativistic equations of the Pythagorean theorem prove that the velocities of the particles cannot be greater than the speed of light because otherwise their solutions “will give imaginary numbers“. What a stupid argument! Aren’t they aware of the fact that all numbers are imaginary signs? They are symbols of the mind – the Platonic shadows of the real world. Why don’t these guys study Greek philosophy! This will help them avoid such stupid conclusions.

As I see, the Byzantine scholar also disproves their conclusion. Good! He proves that the aggregated velocity of the particles is greater than the speed of light (equation (189c)). If velocity is a mathematical quantity of energy, as I have defined it for gravitation, it follows that the particles of matter must have a greater energy than light. This physical fact was predicted by the famous Thracian atomist – Democritus. He teaches us that atoms have emerged from light – they are condensed light and must have more energy than light. In this case, their velocity is greater than that of light. Democritus is, indeed, a good student of great Heraclitus who says: „Da tutte le cose ne sorge una sola, e da una sola possono sorge tutte (217)“

This is an exciting idea. I will have to work it out, after I have finished with this experiment and, if I may hope, the inquisition will no longer bother me. Heraclitus idea that all objects emerge from light (flux) and disappear into light seems to be a key idea of this Byzantine scholar who also comes from Thracia. Indeed, to believe that the speed of light is the maximal possible speed, only because a mathematical solution of an artificial equation will render imaginary numbers is not at all convincing to me. I wonder how many physicists will earnestly believe this nonsense in the future. I suppose that such erroneous conclusions stem from a misapprehension of the fact that physics is applied mathematics.

Only when this fact is well understood, can we perceive why most non-mathematical interpretations of physical results are not true. I recommend all future scholars to consider my advice seriously, not only because I am the founder of modern physics, but because I am in the first place an excellent mathematician.“

Galilei scrutinizes Nostradamus’ books silently for a while, then exclaims: „There it is! Lorentz, Einstein & Co. seem to realize this truth too. They argue that if E is much greater than the rest energy m_0c² in equation (262), that is, if m_0c² → 0, then E = pc; this would say that if the side of the right triangle b approaches zero, b → 0, then a will approach c: a → c. Evidenza! In this case, the energy in a is equal to the energy in c. Questo lo chiamo “instinto di conservazione“ (218). Ecco la! Energy cannot be destroyed. How right was Heraclitus to say:

“Il mondo che abbiamo intorno, e che è lo stesso per tutti, non lo creò nessuno degli Dei o degli uomini, ma fu, è, e sempre sarà, Fuoco vivente. Un bel Fuoco che divampa e si spegne secondo misura (219).“

Notes:

217. One thing emerges from all things, and all things can emerge from one thing.

218. “I call it the “conservation of momentum“. This is it!“

219. “The world which surrounds us is the same for everybody, no God or humans have created it, but it was, is, and will always be a living fire. A wonderful fire that extinguishes and ignites to a precise measure.“

I.7. Why the Pythagorean Theorem Is in the Core of the Current Geometric Presentation of Most Physical Laws

The geometric meaning of the Pythagorean theorem is that the square of the hypotenuse, measured as a surface c² = [2d-space], is equal to the sum of the squares of the two sides of the right triangle, a² + b², which are also surfaces:

c² = a² + b², that is, [2d-space]_c = [2d-space]_a + [2d-space]_b

Below I have published several graphic illustrations of this intrinsic meaning of the Pythagorean theorem:

 

 

[Figures: graphic illustrations of the Pythagorean theorem as an equality of square areas over the hypotenuse and the two sides]

I have shown in the new Physical and Mathematical Theory of the Universal Law that most separate laws in physics are described within geometry as area = [2d-space]. This has been done by all the physicists throughout the centuries since Galileo Galilei first measured gravitation as a free fall and along inclined planes by employing the Pythagorean theorem and introducing the new physical quantity velocity:

v = s/t = sf = [1d-space-time]

In many cases these particular physical laws, such as the various laws of gravitation or the laws of electricity and magnetism, were then presented in a static way as scalars in classical mechanics (statics), electromagnetism and thermodynamics (scalars are the mathematical numbers used for the magnitudes of vectors, which are geometric presentations; I will discuss the vector rule below). This was achieved by employing a very simple mathematical trick that the scientists did not process methodologically and have not realized to the full extent until the present day. This omission is one of the chief sources of their cognitive blindness and of most of the systemic blunders they have committed in physics.

The most conspicuous one is to define the area as charge in electricity and to believe, up to the present day, that the SI unit of one coulomb is a measure of a charge that really exists in the particles of matter, while it is in fact a synonym, or rather a pleonasm, for one square meter. I have thoroughly investigated this colossal blunder, which has confounded the entire theory of electromagnetism, and from there the theory of quantum mechanics as presented in the standard model, which erroneously postulates that all elementary particles must have a charge, in this pivotal article:

The Greatest Blunder of Science: „Electric Charge“ is a Synonym for „Geometric Area“.

In other words, scientists have not yet grasped the epistemological foundation of all the physical terms and quantities they have introduced from Geometry and/or Mathematics into Physics throughout the ages so as to describe Nature quantitatively in its plurality of distinct energetic phenomena. They erroneously believe to the present day that the mathematical and geometric abstractions (as abstract definitions of physical quantities) they have introduced in physics really exist in nature, e.g. as properties of matter, while they are in fact mere Platonic shadows of their unprocessed minds, as Galilei also argues during his experiment on gravitation in my essay.

These unreflecting physicists have simply decided in an a priori manner to set the conventional time t = 1 and to exclude it from all further presentations of space-time by eliminating it from their original presentation of space-time as velocity v or square velocity v², which is actually the physical quantity “gradient” (e.g. electric or mechanical gradient):

v = s/t = sf = [1d-space-time] = [1d-space] = line/vector, or

v² = s²/t² = s²f² = [2d-space-time] = [2d-space] = area, when t = 1/f = 1

This is the mathematical operation with which the scientists artificially eliminate motion from their secondary geometric presentation of energy = space-time = All-That-Is in physics and operate mostly with static geometric magnitudes such as surface/area or straight lines (distance or vectors).
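This operation of arresting time can again be made explicit with a minimal symbolic sketch (my own illustration): setting the conventional time to 1 turns the dynamic quantity v² into the static quantity area.

```python
# A minimal symbolic sketch (my own illustration) of the operation described above:
# assigning the conventional time t the number 1 reduces square space-time s^2/t^2
# to the static geometric quantity "area" s^2, i.e. motion is arrested in the formula.
import sympy as sp

s, t = sp.symbols("s t", positive=True)

square_space_time = (s / t) ** 2          # v^2 = s^2/t^2 = [2d-space-time]
area = square_space_time.subs(t, 1)       # t = 1  ->  s^2 = [2d-space]

print(square_space_time)   # s**2/t**2
print(area)                # s**2
```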

This is the only and sufficient reason and explanation as to why the scientists failed to grasp the nature of energy = All-That-Is and missed the existence of the Universal law which assesses energy exchange dynamically.

Instead of discovering the simplicity of the physical world, all the physicists derailed their view of the world  by inventing a plethora of geometric presentations of the energetic phenomena they observed and defined as gravitation, electromagnetism, quantum effects, etc. which are various forms of energy exchange. To this day all scientists erroneously believe that the four fundamental forces have a distinct real existence only because they have introduced so many particular laws and equations to describe them. This is the insanity of modern-day physics which I have revealed for the first time in the new Physical and Mathematical Theory after I discovered the Universal Law of Nature. Or as Galilei says, the truth is always very easy – and very painful for most people and scientists – but one must discover it first.

In reality, all the physical quantities and their corresponding laws one finds in present-day textbooks on physics are mere inventions of the scientists’ minds, made by employing for the most part Geometry and later on Mathematics as its commutative system (transitive system). As we all know, both systems are hermeneutic disciplines of abstract human thinking and consequently have no external object of existence. This is the famous proof of existence of mathematics for which theoreticians are still searching in vain, ever since they discovered at the beginning of the 20th century that mathematics is in a foundation crisis, acknowledging its inability to prove the validity of its own existence with its own hermeneutic means. This is also known as Hilbert’s formalism, or Hilbert’s program for the axiomatization of mathematics and geometry, which he first announced in 1900 at the International Congress of Mathematicians in Paris.

This crisis lasted till 1995 when I resolved it in the new Physical and Mathematical Axiomatics by proving that all mathematical presentations of physical laws in form of mathematical equations are derivations of one single law which I then defined as the Universal Law. As this law assesses energetic interactions (energy exchange), because there is nothing else in All-That-Is, the Universal Law is a Law of Energy.

Humans can only perceive energy with their limited senses as space-time and that is why the Universal Equation is given as square space-time:

E = v² = s²/t² = s²f² = [2d-space-time]

As any energy exchange can be measured only with respect to a reference system, such as the anthropocentric SI system, energy is always measured as a quotient. This means that all values given for any particular amount of energy are dimensionless numbers that belong to the continuum set of all numbers in mathematics. This is very important to know. In order to consider this basic fact, which all physicists have not realized yet to the full extent, I have introduced the universal abstract symbol SP(A) which stands for the “statistical probability of the event A“.

This symbol has been introduced only for practical purposes, so as to make the recalcitrant physicists aware of the fact that the partial mathematical discipline “statistics”, which they introduced into their discipline relatively late, after Boltzmann first used it in thermodynamics, is nothing else but simple mathematics. In this case the continuum set of all numbers (0,∞), which Frege and Cantor introduced for mathematics at the end of the 19th century and thus paved the way for the foundation crisis of mathematics, is identical to the probability set (0,1) of modern statistics (probability theory). That is why I present the Universal equation in the following manner:

E = SP(A)v² = SP(A)s²/t² = SP(A)s²f² = SP(A)[2d-space-time]

This equation has the advantage that it is valid for all natural laws currently known in physics, of which there are more than 100 if one believes the standard textbooks on physics. That is to say, in the way they are presented nowadays in textbooks of physics, these partial laws can be easily expressed, through a simple mathematical transformation, with the above universal equation. This proves that these particular physical laws are mere applications of the Universal Law. This is the utmost simplification of physics ever achieved in the rather short history of this discipline of less than four centuries since Galileo Galilei performed his famous experiment on gravitation around 1634.
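One possible reading of such a transformation (my own illustration, under the document’s convention that the mass factor is treated as the dimensionless number SP(A), as explained further below for m = E/E_r) is the classical expression for kinetic energy:

E_kin = ½mv² = (m/2)v² = SP(A)s²f² = SP(A)[2d-space-time], with the dimensionless factor SP(A) = m/2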

This same Universal Equation can be presented in many ways within mathematical formalism. I have chosen a second universal presentation which accounts for the fact that energy is of discrete character, which means that it is quantized and is exchanged in energy packages (quanta) of constant amounts of energy that are specific for each particular energy exchange. For this purpose I have introduced the term “action potential” EA, which includes all possible quanta (packages) of energy exchange in All-That-Is. This introduces another great simplification in physics.

When the action potential is used in the Universal Equation, then it can be written in the following way both as a normal mathematical equation and in the new space-time symbolism, which I first created and introduced in physics in 1995:

E = EA f = SP(A)[1d-space-time][1d-space] f, where

EA = SP(A)[1d-space-time][1d-space]

These are the basics of the new physical and mathematical theory of the Universal Law, and if you have grasped these presentations, you have grasped the Universal Law and how Nature operates under this Law of One.

For instance, the famous Einstein equation regarding energy E = mc² is an application of the Universal equation for photon space-time which is characterized by the speed of light c. In the new theory of the Universal Law I prove beyond any doubt that the physical quantity mass does exist but is in fact an energy relationship when the current method of measurement and method of definition of this physical quantity is properly analysed, that is to say, when it is axiomatically assessed. I will discuss the method of definition and the method of measurement of basic physical quantities and their corresponding units within the SI system in my next popular-scientific article on the Universal Law.

The current belief shared by all physicists that mass is an intrinsic property of matter, as presented in Newton’s law of gravitation, is probably the greatest blunder in physics, together with the wrong idea of defining the area of particles as charge (see above). In this case m = E/E_r, where E_r is the energy of the reference system, which in the SI system is 1 kg. From this anybody can conclude that mass is a quotient of two energies and as such a dimensionless number, as the unit kg cancels in the quotient. A dimensionless number belongs to the continuum set, or to its equivalent probability set, which is presented in the new Axiomatics as SP(A): m = SP(A). In this case we can write Einstein’s equation of energy in the following way in the new space-time symbolism of the Universal Law, where the index p stands for photon space-time:

E = mc² = SP(A)[2d-space-time]p
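A minimal numerical sketch (my own illustration, under the reading m = E/E_r given above, with E_r taken as the energy equivalent of the 1 kg reference): the dimensionless energy ratio reproduces the familiar SI value of a mass as a pure number.

```python
# A minimal numerical sketch (my own illustration) of mass read as a dimensionless
# energy relationship m = E/E_r, with E_r the energy equivalent of the 1 kg reference.
c = 2.99792458e8                      # speed of light in m/s
E_reference = 1.0 * c**2              # energy equivalent of the 1 kg reference, in joules

m_electron_SI = 9.1093837015e-31      # electron mass in kg (CODATA value)
E_electron = m_electron_SI * c**2     # its energy equivalent in joules

m_as_ratio = E_electron / E_reference # dimensionless energy relationship
print(m_as_ratio)                     # ~9.109e-31 – the same number as the SI mass in kg
```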

It is very simple indeed, once one realizes all the blunders that scientists have introduced in physics. The great popularity of Einstein’s energy equation lies in the fact that it is a concrete application of the Universal equation, and this explains its universal validity. It is generally accepted by all physicists that this equation assesses the energy of all matter, although they have failed to explain why and have resorted to the notion that this equation is a stroke of genius. Nothing is further from the truth, as Einstein personally is responsible for the greatest blunders in physics through his theory of relativity, which has pushed this science backwards for almost a century, until I came and corrected them in the new theory of the Universal Law. I have discussed all the mistakes of Einstein in physics in detail in volume II and in particular in the section on the theory of relativity.

Einstein’s equation of energy is of universal character because photons are the building particles of matter. When the latter is assessed in terms of mass, the mass (energy relationship) of all particles can be derived (measured) from the mass of the elementary photon m_p, which is a new fundamental natural constant I first discovered in 1995 (see Table 1 on the homepage). By the way, here is another colossal blunder of physicists – up to the present day they believe that photons do not have mass, as they are not capable of grasping their own definition of mass, which is simply an energy relationship and not an intrinsic property of matter. The same holds true for neutrinos, notwithstanding the overwhelming evidence against this assumption in the standard model, as I wrote last year to the Nobel Prize committee.

Because of this blunder scientists cannot account for more than 90% of the mass that should be in the universe according to their theoretical models. In order to repair this blunder they have introduced another gargantuan blunder – the existence of dark matter in cosmology, which cannot be found. For more information I recommend reading the section “Cosmology” in volume II. Until now, science as embodied in physics was essentially a joke, but with the development of modern cosmology after WW2 it has become “fake science”. Now that people have finally realized that all MSM are fake news, as I have been preaching on this website since its inception, it is time to begin realizing that present-day science is also fake science and fake knowledge, just as economics and the corresponding financial system are one giant Ponzi scheme.

All these blunders in physics go back to Einstein who rejected the existence of photon space-time as a distinct energetic level of All-That-Is in his relativity theory and postulated instead the existence of vacuum where gravitation and electromagnetism are propagated as an “action at a distance” also defined as “long-range correlation”. This was probably the greatest blunder of Einstein, among many others in his physical thinking, and it explains why he himself was unable to grasp the true meaning of his famous equation.

I was the first physicist to show that the mass of all elementary particles and the macromass of all material objects can be easily calculated from the elementary photon, and that is why this application of the Universal equation is valid both for matter and for photon space-time (see Table 1). The mass of the elementary photon is part of the Planck constant h, which is the smallest action potential (quantum, energy package) that material instruments can discriminate and measure. It is the cognitive limit of any physical and human knowledge when the personality is separated from the Source and has no contact with her soul at the ego-mind level.

The Planck constant itself is at the core of the Heisenberg uncertainty principle, which plays a central pseudo-ideological role in quantum mechanics. Before I explained the true meaning of this concept in the 90s, scientists had tried to interpret it in a rather clumsy and unsophisticated manner, sometimes by employing a huge mathematical apparatus only to hide their ignorance. What this principle actually tells us is that both humans with their limited senses and their material instruments cannot assess the underlying higher-frequency, higher-dimensional energies of the 7F-creation levels, which are also the levels of the soul and other higher entities such as the Elohim. Heisenberg defined this fact as the “Undeterminiertheit” (indeterminacy) of quantum physics, which is indeed a very awkward term, as it is not about quantum physics but about the limitations of human perception.

Currently scientists can only assess these two entities, matter and photon space-time, with their material instruments and thus have no clue that both levels of energy are secondary creations of higher dimensional, higher frequency energies, to which also all human souls belong. This fact is currently rejected by all agnostic scientists as “esoteric crap”. That is why they will be shocked when we ascend and demonstrate the true nature of humans as multidimensional beings. This will also mark the end of current narrow-minded, empiric physics and the beginning of the new transcendental biophysics of the Universal Law which will truly flourish in the 5D and higher dimensions where part of humanity will ascend in the course of this year.

At this place it is important to stress one more time that humans can only perceive energy as space-time with their limited senses, which is how this 3D-holographic matrix is created as a very realistic illusion. In reality, there is no space in the higher realms but only frequencies. I have further shown that the physical quantity space s is identical with the physical quantity “conventional time t“. The lack of understanding of this simple fact has led to the most grievous cognitive dissonance in the petty human psychology of scientists and esotericists alike. If one analyses all the channelled messages according to this criterion, one can very easily expose them as fraudulent and not coming from the higher realms, where this fact is a well known truth and reality.

This is indeed the most difficult notion for any enlightened being to perceive as our very understanding of human existence is linked to space and time. As long as humans are aware of the fact that

energy = All-That-Is = the primary term of our consciousness = space-time

can only be perceived as space-time and follow this knowledge in the mathematical presentation of all applications of the Universal Law, as I have done in the new theory, all is well. The moment scientists begin to eliminate time as frequency from the equations by assigning it the number 1 and operate only with static geometric quantities such as area, then the cognitive malaise begins.

Why? Because in this way scientists eliminate the motion (movement) of energy, which is its inherent universal property, from their surrogate mathematical presentations of space-time = energy = All-That-Is, and thus from physics, which was meant to be an exact human science that truthfully describes energy = All-That-Is. This is the primary source of all blunders and all illusions in science and daily life.

Why are scientists doing that? Space-time assesses the dynamic aspect of energy as constant energy exchange. However, as scientists have great difficulties in measuring energy exchange, they have to arrest time in their minds (not in reality) and present it as a static, immobile quantity. In most cases they present space-time as area, for instance as the square of the hypotenuse and the sides of the right triangle in the Pythagorean theorem:

E = SP(A)v² = SP(A)s²/t² = SP(A)s²f² = SP(A)[2d-space-time] =

SP(A)[2d-space], when t = 1

This unprocessed mathematical (geometric) presentation of space-time = energy as space in physics has contributed more to the current illusion of this 3D-holographic reality than any other false scientific idea, of which there are plenty, because it carries the nimbus of scientific exactness and experimental reliability. In fact, it is a perpetuation of the cognitive insanity of all humans including the small elitist faction of humanity that define themselves as scientists.

The broader use of [2d-space] as a physical quantity can be related to another common geometric method of presenting forces – the vector rule, of which there are numerous applications and presentations that only complicate physics and its understanding by also introducing the sine and cosine functions:

[Figure: vector decomposition of forces using the sine and cosine functions]

If you look closely, they all depart from the Pythagorean theorem and its practical application, the parallelogram method.

[Figure: the parallelogram method of adding force vectors]

I assume that all my readers have studied these methods in school so that I will not dwell on them here.

As physicists had to acknowledge that space-time is energy in motion and most of the time has a direction, they had to modify their geometric method of presentation so as to account for this universal intrinsic property of energy = space-time. For this, and only for this reason, they have introduced the concept of the vector, which is simply a straight line with a direction, drawn as an arrow. Vectors are used in physics to describe forces (energy interactions) as motions with direction. That’s all. From there, the physicists have developed the vector rule, which every one of my readers must know from geometry and physics at school. Here are a few practical applications of the vector rule in classical mechanics:

[Figures: practical applications of the vector rule in classical mechanics]

The same applies even to the hand rules in electromagnetism, in acknowledgement of the fact that electric and magnetic forces, as field forces, also have a direction.

Ultimately, all these practical rules can be reduced to the Pythagorean theorem as the universal geometric presentation of space-time. That is why the Pythagorean school has played such an eminent role in the history of human science, philosophy and Gnosis. Its mystery has been finally unraveled with the development of the new Physical and Mathematical Axiomatics of the Universal Law.
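A minimal numerical sketch (my own illustration) of this reduction: adding two perpendicular force components by the parallelogram rule and recovering the magnitude of the resultant directly from the Pythagorean theorem.

```python
# A minimal sketch (my own illustration) of the parallelogram rule discussed above:
# two perpendicular force components are added as vectors, and the magnitude of the
# resultant follows directly from the Pythagorean theorem (3-4-5 triangle).
import numpy as np

F1 = np.array([3.0, 0.0])                 # force component along x, in newtons (assumed values)
F2 = np.array([0.0, 4.0])                 # force component along y, in newtons

F_resultant = F1 + F2                     # parallelogram (here: rectangle) rule
magnitude = np.linalg.norm(F_resultant)   # sqrt(3^2 + 4^2) = 5

print(F_resultant, magnitude)             # [3. 4.] 5.0
```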

 

I. 8. Doppler Effect Is the Universal Proof for the Reciprocity of Space and Time

In my previous publications on the SI system I proved unequivocally that the physical world = space-time = All-That-Is has only two dimensions – space and time. I did this by showing that all the other SI dimensions (quantities) and their corresponding units can be derived from the two constituents of space-time when their current definitions are properly translated into mathematical language, which physicists have failed to do since the inception of this science when Galilei first measured gravitation.

In this article I will discuss the Doppler effect and will explain why this ubiquitous effect as presented in wave theory is the universal manifestation and proof for the reciprocity of space and time.

I dealt with this issue already when I explained how the SI units for space (distance) and conventional time (t = 1/f = reciprocal frequency) are derived from the speed of light c of a reference photon system.

c =  λ f = [1d-space] f = [1d-space-time]p 

Therefore, the two constituents of space-time cannot be separated in real terms because they are canonically conjugated. The equation of the speed of light c = λf is intrinsic to any measurement of photon frequency and wavelength. Neither wavelength nor frequency can be regarded as a distinct entity – they both behave reciprocally and can only be expressed in terms of space-time, which is how the human mind perceives energy with its limited senses. This knowledge is also basic to the new Gnosis of the Universal Law.

The wavelength and frequency of photons are the actual quantities of the two constituents, space and time, of this particular level of space-time. The measurement of any particular length [1d-space] or time f = 1/t in the physical world is, in fact, an indirect comparison with the actual quantities of space and time of a photon system of reference. The introduction of the SI system obscures this fact and that is why I have eliminated it in the new theory of the Universal Law.

At the same time I have proved in volume II, section 4 on wave theory, and throughout the book that all systems and levels of space-time are superimposed wave systems that interact according to the laws of constructive and destructive interference, also defined in the new Gnosis as the Laws of Creation and Destruction. That is why they can be formalistically defined as U-sets. A U-set is a set that contains the Whole = Energy = Space-Time as an element, and this is the theoretical, physical foundation of the current holographic model on earth as a distorted replication (mirror image) of the multiverse. This definition is made within mathematics, which is itself the only method of definition and measurement of any physical quantity, as I have explained with respect to the SI system.

All physical quantities are abstract mathematical ideas that are first created in the human mind and only then projected onto the surrounding physical world in a secondary manner when an experiment is performed. All physical experiments and their measurements, which should be reliable and reproducible, are based on the use of the SI system. This is basic physical knowledge and should be shared by everyone with a modicum of physical education from school.

The observation of the Doppler effect in all wave systems which are in motion (all systems of All-That-Is are in motion) is a universal phenomenon because it is the manifestation of the reciprocal character of space and time. Since matter and photon space-time are of wave character, as I have proved in my previous publication where I derived the mass of matter from the mass of the basic photon by employing the Compton frequencies of the elementary particles (see also Table 1), the Doppler effect is the universal verification of this fundamental property of the primary term. This I have deduced in an axiomatic way from our consciousness in the new Physical and Mathematical Axiomatics of the Universal Law.

The Doppler effect is fairly simple to understand:

When a wave source and a receiver are moving relative to each other, the frequency observed by the receiver is not the same as that of the source. When they are moving towards each other, the observed frequency is greater than the source frequency; when they are moving away from each other, the observed frequency is less than the source frequency. This is the essence of the Doppler effect.

What is the interpretation of the Doppler effect in the light of the Universal Law? Let us consider the medium that is confined by the wave source and the receiver as a distinct system of constant space-time. For didactic purposes, we choose an electromagnetic wave, that is, we have a system of photon space-time, although our elaboration holds in any other medium. The space-time of the photon system is determined by the distance between the wave source and the receiver which is [1d-space]-quantity.

As long as the wave source and the receiver are not moving, the space of the photon system as measured by the distance is constant. In this case, the space-time of the system is also constant. This is also true for the time f = 1/t = reciprocal conventional time t of the photon system, which is the complementary constituent to space. Indeed, the observed frequency is constant when the distance to the receiver remains constant.

When the wave source and the receiver are moving towards each other, the space of the photon system decreases. In this case, it is irrelevant which one of them is responsible for this relative change of distance. As the space-time of the photon system that is confined by the wave source and the receiver is constant, its time (frequency) f should increase in a reciprocal manner. This relative change is observed by the receiver as an increase in the frequency of the emitted electromagnetic wave:

when [1d-space] → 0, then f → ∞, because f = 1/[1d-space].

When this phenomenon is observed for the visible light, the relative change of frequency is called violetshift.

When the wave source and the receiver are moving away from each other, the distance between them increases. In this case, the space of the photon system increases and its time decreases in a reciprocal way:

when [1d-space] → ∞, then f → 0, because [1d-space] = 1/f.

This change in the frequency is called redshift when observed for the visible light.

As we see, the reciprocity of space and time that is assessed by the Doppler effect can be adequately expressed with the number “1“. The Doppler effect is usually summarized by the following equation (1):

f′ = [(1 ± u/v) : (1 ± us/v)] fo = SP(A) fo

where u is the speed of the receiver relative to the space-time of the photon system (medium) and us is the speed of the source relative to the space-time of the photon system.

The above equation says that the relative change in wave frequency f′/fo = SP(A) = time is a dimensionless number (time relationship) belonging to the continuum n = SP(A), also defined in the new theory of the Universal Law as the statistical probability of the event A, SP(A). Both terms are identical descriptions of the primary term of human consciousness Energy = Space-Time within mathematics according to the primary axiom of the new Physical and Mathematical Axiomatics of the Universal Law.

This is the essence of physics and mathematics: all we can do in these disciplines is to build relationships between [1d-space]-, f-, or [nd-space-time]-quantities of selected systems of space-time and to obtain dimensionless numbers belonging to the continuum n.

The Doppler effect is basic to the new explanation of gravitation which I shall present in the next publication. Until now conventional physics has been unable to explain how gravitation exerts its force at a distance, and this is one of the major fallacies of this natural science. For this reason gravitation cannot be integrated with the other three fundamental forces in the standard model. This shows how deficient this science truly is and why the physicists have failed to recognize the existence of the Universal Law much earlier. In the new theory I integrate gravitation with the other three forces as already shown in my previous article and also illustrated on one page in Table 1.

Notes:

1. For further information see the standard derivations of the Doppler effect in PA Tipler, Textbook on Physics.

 

I.9. The Mechanism of Gravitation – for the First Time Explained

The Most Important Article on the Internet!

“In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual.” – Galileo Galilei

Although modern physics commenced with the measurement of gravitation (Galilei), it has been unable to develop a theory of gravitation that unifies this force with the other fundamental forces, such as the electromagnetic, weak and strong forces. This shortcoming of physics is generally acknowledged. While gravitation has been elevated to a mystery, physics has degenerated to an esoteric search for the hypothetical “graviton” through which this force should be mediated in empty space.

This cognitive misery of modern physics is self-inflicted – it stems from the wrong assumption that space is vacuum, in which gravitation is transmitted through hypothetical fields or particles as so-called “long-range correlation”. None of the physicists so far has been fully aware of the fact that gravitational and electromagnetic fields are abstract mathematical concepts that have been introduced through human consciousness – the semantic (and not the experimental) search for their real meaning reveals that they are partial perceptions of photon space-time. The latter is an aggregated set that includes the level of gravitation, the level of electromagnetism, the level of weak forces and infinite other levels, of which we have no idea at present.

For this reason we speak in the new Axiomatics of infinite levels of space-time, whereas conventional physics reduces the physical world to only four forces (levels) in the standard model. As all parts of space-time are U-subsets that contain themselves as an element, the element being space-time, we enjoy the degree of mathematical freedom of aggregating the infinite levels of space-time to one level (space-time), two levels (axiom of reducibility), or n-levels of space-time (n = continuum = infinity).

Therefore, we need not know all the levels of space-time to describe the physical world. This task is impossible – one cannot depart from the parts, which are infinite, to define the Whole. This approach is a vicious circle, to which present-day physicists are addicted by defining the abstract physical quantities they have introduced in an a priori manner through mathematics with the help of other physical quantities, e.g. acceleration through mass, charge through current, etc. This kind of physics is a Sisyphean labour – it does not enlarge our knowledge and is doomed to failure.

The inability of traditional physics to explain gravitation is a particular symptom of this cognitive malaise. The only correct approach from an epistemological point of view is to depart from the Whole to comprehend the parts. This is the essence of the Universal Law. As all levels manifest the properties of the Whole which is a closed entity (conservation of energy), we can aggregate the parts to appropriate sets and acquire the necessary information. This information consists only of space-, time-, or space-time relationships – it is equivalent to the continuum (1).

For instance, we can describe the visible universe – the total set of space-time that we can assess at present – as an interaction between two levels: the photon level and the gravitational level that includes all matter. The result of this dynamic interaction is the extent of the visible universe as a circumference, which is a basic cosmological constant that I first derived from the Universal equation (for further information see equation (37) in Volume II):

S = c²/G = [1d-space]-quantity

When expressed in meters, this quantity is a relationship to the anthropocentric surrogate of 1 m. The gravitational level incorporates all gravitational objects, such as planets, suns, white dwarfs, neutron stars, red giants, quasars, pulsars, solar systems, black holes, galaxies, including radio-galaxies, Seyfert galaxies, local groups and so on.
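
Numerically, the ratio c²/G can be evaluated from the CODATA constants; its interpretation as the extent of the visible universe is the one given in Volume II, equation (37), and is not derived in this sketch:

```python
# Numerical value of the ratio c^2 / G from CODATA constants. Its cosmological
# interpretation as the extent of the visible universe is the one given in
# Volume II, equation (37); the sketch only evaluates the number.

c = 299_792_458            # speed of light, m/s
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2

S = c**2 / G
print(f"c^2 / G = {S:.3e}")   # ~1.35e27 in SI units
```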

As we see, the gravitational level can be subdivided into infinite levels as each of the aforementioned gravitational systems can build a corresponding level, e.g. planet level, solar level, galactic level etc. As all levels are open U-subsets that contain themselves as an element, and space-time is a closed entity, it is not possible to distinguish between these levels in real terms, that is, to separate them. Nevertheless, each abstract definition of a level that is a distinct object of thought has a real correlate in space-time because such thoughts are U-sets and contain themselves and the Whole (the primary term) as an element. Only N-sets, such as the idea of vacuum (the void, the nothing, that contains energy, something, as an element), that exclude themselves as an element have no real correlates and should be excluded from scientific thinking.

This preliminary philosophical introduction intends to liberate the reader from false expectations that have been nurtured for centuries in the cultural tradition of scientific agnosticism and have prevented scientists from understanding the mechanism of gravitation, beginning with Galileo Galilei, Newton, Kepler, Einstein to the present day. Although such expectations exhibit an astounding resistance to logical arguments, the simple mechanism of gravitation as presented below is an adequate remedy against this mental blockage – its simplicity is an aspect of the new axiomatic approach in physics which I first introduced in science with the discovery of the Universal Law.

The motion of planets or other gravitational systems is conventionally assessed by Kepler’s laws and Newton’s law of gravity. These laws are applications of the Universal Law for the space-time of gravitational rotation (see Volume II, chapters 3.5 & 3.6). In this context it is important to observe that

Any real motion in space-time is a rotation.

Let us now consider the rotation of the earth around the sun. The earth’s orbit is an ellipse with the sun at one focus. The closest distance to the sun is called perihelion, rmin = 147.1×10⁹ m; the farthest distance to the sun is called aphelion, rmax = 152.1×10⁹ m. The semimajor axis a equals half the sum of these constant distances, a = 149.6×10⁹ m. The numerical eccentricity ε of the earth’s orbit is ε = 0.016677. It is obtained from the linear eccentricity, defined as the distance between the focus and the centre of the ellipse, divided by the semimajor axis a:

ε = 0.5(rmax − rmin)/a = 0 ≤ SP(A) ≤ 1.

For the two distances we get: rmax = a(1 + ε) and rmin = a(1 – ε).
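
A short numerical check of this orbital geometry, using only the perihelion and aphelion distances quoted above:

```python
# A short numerical check of the orbital geometry above, using only the quoted
# perihelion and aphelion distances of the earth.

r_min = 147.1e9        # perihelion distance, m
r_max = 152.1e9        # aphelion distance, m

a = 0.5 * (r_max + r_min)              # semimajor axis
eps = 0.5 * (r_max - r_min) / a        # numerical eccentricity, 0 <= SP(A) <= 1

print(f"a   = {a:.4e} m")              # ~1.496e11 m
print(f"eps = {eps:.5f}")              # ~0.0167
print(f"r_max = {a * (1 + eps):.4e} m, r_min = {a * (1 - eps):.4e} m")
```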

[Figure: the earth’s elliptical orbit around the sun, showing perihelion and aphelion]

This simple geometry is the method of definition and measurement of gravitation in classical mechanics. What is the epistemological background of this traditional geometric approach to celestial motion? The linear eccentricity Δr can be regarded as [1d-space]-quantity of a new gravitational system that results from the interaction between the sun and the earth (axiom of reducibility) – it is constant for each planet because it reflects the constant space-time of the resultant system.

The numerical eccentricity ε is a relationship of two [1d-space]-quantities that belongs to SP(A). It assesses the relative change of the space-time of the photon system that is confined by the earth during its revolution around the sun. The background of this conclusion is fairly simple. If ε approaches zero, the earth’s orbit will become a circle. However, this is not possible in the real physical world – it would mean that the space of the new system should be zero, that is, its space-time should also be zero. This never happens as all systems have energy and thus space-time.

This example illustrates why we never encounter ideal circular motion in the real physical world:

All real rotations of gravitational systems are ellipses or approximate this geometric form.

In the ideal case of circular motion, the distance of the earth to the sun would remain constant during its revolution. This would mean that there should be no relativistic change in the space-time of the photon system confined by the circular orbit of the earth with the sun at its centre because the radius of this orbit represents a constant distance for all points of the orbit to the sun. Therefore, if the planet would have an ideal circular orbit, there should be no Doppler effect between the earth as a source and the sun as a receiver.

In real space-time, the earth moves away from the sun when it revolves from perihelion to aphelion and approaches the sun when it revolves from aphelion to perihelion. Thus the actual orbit of the earth effects a relativistic change in the space of the photon system confined by the earth’s elliptical rotation. When the earth moves from perihelion to aphelion, the space of the photon system expands; when it moves from aphelion to perihelion, the space shrinks. This relativistic change of the space leads to a reciprocal change in the time f of the photon system that can be assessed by the Doppler effect (for further information see the previous publication).


Before we proceed with our explanation of gravitation, we shall solve at this place a basic epistemological problem of conventional physics that hinders an understanding of gravitation in terms of the Universal Law. The earth’s approaching to the sun and its subsequent receding from the sun along its orbit can be regarded as distinct motions and described as attraction and repulsion. Thus any real rotation, such as gravitational rotation, consists of a period of attraction and a period of repulsion. The two phenomena, attraction and repulsion of celestial bodies, result from the reciprocal behaviour of space and time.

The same applies to the products of such rotations – the waves and oscillations that occur follow the Doppler effect. This can be illustrated with the following example. If a mass particle oscillates around its fixed point when a wave is propagated in a medium, we can describe the motion of the particle either as repulsion or attraction with respect to the fixed point (see also restoring force in Hooke’s law, Volume II).

We encounter the same phenomenon in electromagnetism. It is an established fact that charges with the same sign repel, while charges with opposite signs attract. Unfortunately, charge is an area – in most cases the cross-sectional area of the antinode (the position of maximal displacement in a standing wave system) – so that positive and negative signs of charges are pure convention within mathematics (see Volume II, chapter 6.2). They are mathematical symbols with which constructive and destructive interference of superimposed waves is formally assessed (see Volume II, chapter 4.3).

The elementary idea of “attraction“ and “repulsion“ in physics is an intuitive perception of the reciprocal character of space and time.

This fundamental new insight effects another significant simplification in our outlook on the physical world. This fact is totally confounded in present-day physics. The latter encounters insurmountable problems in providing a consistent interpretation of attraction and repulsion of charges in electromagnetism, in contrast to gravitation where only attraction is considered, notwithstanding the fact that Coulomb’s law and Newton’s law of gravity are mathematically identical equations, as I have proved in Volume II.

In reality, gravitational attraction is a one-sided perception of this force when it acts at a small distance, for instance, when an object is attracted by the earth in a “free fall“. In this particular case, the path of motion is given as a straight line. However, any translation in space-time is a portion of a larger rotation and thus a geometric abstraction of the latter. For instance when an object falls to the earth and the earth is rotating around the sun, the aggregated path of its free fall will not be a straight line pointing to the centre of the earth as gravitation is usually presented in classical mechanics, but a complex superimposed rotation with a reference to the sun.  As the free fall is rather short in terms of duration, there is not enough time to observe the period of attraction and the period of repulsion. We only observe the period of attraction from a limited human point of view. If we consider, instead, a comet that approaches the earth and then recedes away from it, we can describe the comet’s orbit in terms of attraction and repulsion.

As we see, these two terms are of anthropocentric origin – they represent unilateral, local perceptions of the reciprocity of space and time during rotation, which is the universal motion of space-time. From this elaboration, we come for the first time in the history of physics to the following fundamental conclusion:

There is no principal difference between gravitation and electromagnetism as levels of space-time. Both levels of space-time engender attraction and repulsion of systems during an interaction. Attraction and repulsion of gravitational objects and electric charges are a consequence of the reciprocity of space and time that manifests itself as rotations.

Note: Remember that all gravitational bodies have a charge (cross-sectional area) and each charged particle has a mass (energy measured as energy relationship to a reference system), that is, it is subjected to gravitation – therefore they cannot exhibit different properties.

This conclusion is of paramount cognitive importance for our further elaboration of gravitation and electromagnetism as both levels can be described as superimposed rotations (fluctuations) in terms of wave theory. The latter are the universal manifestation of the reciprocity of space and time.

This property is also described in philosophy as the dialectical principle, although this principle was first introduced in Antiquity and only much later exploited and totally obfuscated by the German idealistic (Hegel, Kant) and later on materialistic (dialectical materialism) schools of philosophy. Currently there is a profound confusion in science regarding the reciprocity of space and time, although this fundamental property is the only topic of the theory of relativity, which neither Einstein, nor all the physicists after him, truly understood.

Only after I discovered the Universal Law in 1995 was this fundamental property of space-time, perceived dialectically as space and time by limited human senses and consciousness, fully recognized and appreciated from a theoretical and cognitive (epistemological) point of view. It is very important to stress this fact at this place so that my readers can better understand the profound blunders that have infested this only exact natural science – physics – which modern humanity has to offer in order to explain the physical reality we live in.

Evidently, the space-time of the photon system confined by the earth’s orbit is subjected to relativistic changes when this planet completes one revolution around the sun. When the earth rotates from perihelion to aphelion, it moves away from the sun. We call this half of a revolution a period of repulsion. The escape velocity ve from the sun during this period is obtained from the tangential velocity of the earth – it is a vector defined by the straight line connecting the earth with the sun that points away from the sun (see parallelogram method of vector addition).

The tangential velocity of the earth alters its magnitude continuously during its revolution around the sun. The same is true for the escape velocity: ve begins to grow as soon as the earth leaves perihelion and achieves a maximal value ve(max), which is a specific constant of the planet, somewhere between perihelion and aphelion. After that it begins to decrease continuously and becomes zero at aphelion, because the tangential velocity is perpendicular to the major axis at this point. When the earth moves from aphelion to perihelion, we have the reverse situation.
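
The behaviour described here, zero radial velocity at perihelion and aphelion with a maximum in between, can be reproduced numerically with the standard Keplerian orbit relations of conventional celestial mechanics; the sketch below uses them only as an illustration, with standard reference values for the sun's gravitational parameter and the earth's orbit:

```python
import math

# A sketch using the standard Keplerian orbit relations of conventional celestial
# mechanics (not a result of the new theory): the radial ("escape") velocity of
# the earth relative to the sun as a function of the true anomaly nu. It is zero
# at perihelion (nu = 0) and at aphelion (nu = pi) and maximal in between, as
# described in the text. GM_sun, a and e are standard reference values.

GM_sun = 1.327e20        # gravitational parameter of the sun, m^3 s^-2
a = 149.6e9              # semimajor axis of the earth's orbit, m
e = 0.0167               # numerical eccentricity of the orbit

p = a * (1 - e**2)       # semilatus rectum of the ellipse

def radial_velocity(nu):
    """Radial velocity dr/dt of the planet at true anomaly nu (radians)."""
    return math.sqrt(GM_sun / p) * e * math.sin(nu)

for deg in (0, 45, 90, 135, 180):
    print(f"nu = {deg:3d} deg:  v_r = {radial_velocity(math.radians(deg)):6.1f} m/s")
# zero at perihelion and aphelion, maximum of ~0.5 km/s in between
```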

In the period of attraction, the velocity of attraction va to the sun behaves as a mirror image to the escape velocity ve in the period of repulsion. The tangential velocity of the earth is the universal quantity of the kinetic space-time of this gravitational system. The relativistic change, to which the kinetic space-time of the earth is subjected during its revolution around the sun, is propagated to the space-time of the enclosed photon system. This change is mediated through the vertical energy exchange between this material system (planet) and the photon system.

The relativistic changes of space, time, or space-time during the vertical energy exchange between the rotating earth and the enclosed photon system can be assessed by the Doppler effect, which is the universal manifestation of the reciprocity of space and time as I have proved in my previous article. The gravitational force that occurs between the earth and the sun and determines the earth’s orbit is propagated through this vertical energy exchange as an “action at a distance“. The presentation of this interaction from a dynamic point of view is essential for an understanding of gravitation.

We ought to observe that neither Newton’s law of gravity, nor Kepler’s laws give any explanation of the actual mechanism of gravitation – these laws merely assess some secondary quantities of the gravitational level of space-time, such as force and acceleration. These laws have no epistemological background. This is considered a major deficiency of classical mechanics.

There are several didactic alternatives how to explain gravitation as vertical energy exchange between matter and photon space-time, depending on the preferred quantities of the primary term. I shall implement here a mixed approach to gravitation by using the conventional quantities of classical mechanics, such as mass, density, acceleration, distance and velocity, so as to make it easier for all conventionally thinking physicists and laymen to finally understand the

mechanism of gravitation as a vertical energy exchange between matter and photon space-time.

Although I shall discuss gravitation from a dynamic point of view, the mathematical calculations that will be discussed are of static character. As physics has not yet developed a mathematical theory that describes space-time in a dynamic way, we are constrained to use traditional data. Besides, it is not the objective of this article to introduce novel dynamic methods of mathematical calculus in physics, but to prove that there is only one law of nature and that space-time has only two dimensions (constituents) that are canonically conjugated and behave reciprocally. Nevertheless, I shall show how such sophisticated methods can in principle be implemented. Therefore my approach will be essentially epistemological and descriptive.

We begin our discussion with the primary axiom – space-time = energy exchange. When this axiom is applied to the earth as a particular gravitational system, it postulates that its space-time remains constant because it reflects the closed character of space-time. This is defined at present as conservation of energy (first law of thermodynamics). This aspect of space-time – to manifest itself in constant amounts (quanta) of energy – is for instance assessed by Kepler’s second law of gravitation. It is applied geometry that assesses the constancy of space-time as the constant area of the photon system encircled by the earth’s orbit, as I have explained in this publication with respect to the Pythagorean theorem:

[Figure: the earth’s orbit around the sun with perihelion, aphelion and the escape velocity vector]

At the same time the earth is an open system – it interacts with the universe through its vertical energy exchange with the photon level. We can describe the earth as an input-output system that exchanges energy with the universe through the photon level, for instance, gravitational, electromagnetic and thermodynamic energy. This input-output process of vertical energy exchange is described by several conventional laws of thermodynamics, such as the Stefan-Boltzmann law and Wien’s displacement law. These laws describe the emission and absorption of photons by matter. I have discussed these applications of the Universal Law in detail in chapter 5.5 on thermodynamics in Volume II.

Thus the emission and absorption of photons describe the vertical energy exchange between matter and photon space-time that takes place in both directions. As mass is an important quantity in mechanics – for instance, in Newton’s law of gravity the gravitational force FG is given as a function of the mass of the interacting objects – we shall use the quantity mass to explain the mechanism of gravitation.

As photons have a mass (see my previous publication), when an object of matter emits photons, it loses mass; when it absorbs photons, it gains mass. This input-output process is in balance for each system with respect to the universe, that is,

input (resorption) = output (emission).

This is the reason why the space-time of systems is constant although they are open and incessantly exchange energy. When applied to material objects, this condition is called “blackbody radiation“ in thermodynamics. The concept is an N-set – it considers a blackbody as a closed system: “An object that absorbs all the radiation incident upon it has an emissivity equal to 1 (certain event) and is called a blackbody.“(2) This intuitive idea of the closed character of space-time in thermodynamics is basic to the definition of the Stefan-Boltzmann law (see Volume II, chapter 5.5).

Indeed, all particular laws can only be defined when the properties of the primary term are considered. The mass of the photons depends on their frequency: mphoton = mp f, where mp is the mass of the basic photon (for further information see here). As all systems are U-sets – they contain themselves, i.e., space-time, as an element – the mass of the basic photon mp is part of the macroscopic mass Mmol of gravitational objects (see Volume II, equations (46), (46a) and (46b), and the previous publication):

Mmol = mp (npr fc,pr + nn fc,n + ne fc,e) n NA,

where npr, nn, ne = number of protons, neutrons and electrons of the substance, and n = number of moles of the object.
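
As a numerical check of this equation for the simplest possible case, the following sketch evaluates the molar mass of hydrogen-1 (one proton, one electron, no neutron, n = 1 mol); the Compton frequencies fc = mc²/h and the basic photon mass mp = h/c² are computed from CODATA constants and are assumed to correspond to the entries of Table 1:

```python
# A numerical check of the equation above for the simplest case: the molar mass
# of hydrogen-1 (one proton, one electron, no neutron, n = 1 mol). The Compton
# frequencies f_c = m c^2 / h and the basic photon mass m_p = h / c^2 are
# computed here from CODATA constants and are assumed to match the entries of
# Table 1.

h  = 6.62607e-34          # Planck constant, J s
c  = 299_792_458          # speed of light, m/s
NA = 6.02214e23           # Avogadro constant, 1/mol

m_p = h / c**2            # mass of the basic photon, ~0.737e-50 kg

def compton_frequency(mass_kg):
    """Compton frequency f_c = m c^2 / h of a particle of rest mass m."""
    return mass_kg * c**2 / h

f_pr = compton_frequency(1.67262e-27)   # proton
f_n  = compton_frequency(1.67493e-27)   # neutron (enters with n_n = 0 for H-1)
f_e  = compton_frequency(9.10938e-31)   # electron

n_pr, n_n, n_e, n_mol = 1, 0, 1, 1
M_mol = m_p * (n_pr * f_pr + n_n * f_n + n_e * f_e) * n_mol * NA
print(f"M_mol(H-1) = {M_mol * 1e3:.4f} g/mol")   # ~1.008 g/mol
```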

In this elaboration, we can alternatively use Planck’s equation E = h f = EA f of photon energy without affecting the final conclusions.

Both the Stefan-Boltzmann law of the power of radiation, P = eσAT⁴ = Ef (Volume II, equation (80)), and Wien’s displacement law of the wavelength of maximal radiation, λmax = B/T (Vol II, eq. (81)), assess the space-time, respectively the space (wavelength), of the emitted photons as a function of temperature T (chapter 5.5). I have proved in my previous article and in the section “Thermodynamics” in Vol II that temperature is a quantity of time, T = f (chapter 5.1).
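
For orientation, the two conventional laws quoted above can be evaluated numerically, for instance for an ideal blackbody at the temperature of the solar photosphere (about 5778 K, a standard textbook value):

```python
# Numerical illustration of the two conventional radiation laws quoted above,
# applied to an ideal blackbody at the temperature of the solar photosphere
# (T ~ 5778 K, a standard textbook value).

sigma = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
B     = 2.898e-3      # Wien displacement constant, m K

T = 5778.0            # temperature, K
e = 1.0               # emissivity of an ideal blackbody
A = 1.0               # radiating area, m^2

P = e * sigma * A * T**4        # Stefan-Boltzmann law: radiated power
lam_max = B / T                 # Wien's displacement law: wavelength of maximal radiation

print(f"P       = {P:.3e} W (for A = 1 m^2)")   # ~6.3e7 W
print(f"lam_max = {lam_max * 1e9:.0f} nm")      # ~501 nm, in the visible range
```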

The new Stankov’s law of photon thermodynamics confirms that any thermal gradient at the material levels leads to a corresponding thermal gradient at the photon level during radiation, which is a specific vertical energy exchange between matter and photon space-time (see Vol II, chapter 5.7). With this law I have eliminated the insane idea of growing entropy (thermodynamic death (?)) in the universe, which is also known as the second law of thermodynamics. This law is in blatant antinomy to the first law of thermodynamics postulating the conservation of energy and must be discarded as a false idea (see Vol. 2, chapter 5.6). Such paradoxes and contradictions have turned physics into “fake science” and it is a conundrum to me why physicists are not aware of this fact and do nothing to improve their science.

In the present discussion, I shall not consider the energy exchange of the earth with the rest of photon space-time. I assume that the input is equal to the output (primary axiom). The same holds for the sun. We shall only describe the relativistic change in space and time of the enclosed photon system during one revolution of the earth around the sun.

However, we do not say that the earth is a closed system – we merely use the notion of the primary axiom in the sense of “ceteris paribus“ (other things the same). This is an a priori condition in any mathematical presentation of real space-time – for instance, we can only build equations under the condition of ceteris paribus. This abstract assumption is especially popular in economics (3).

When the earth moves from perihelion to aphelion, the escape velocity ve increases continuously to the maximal value ve(max) and after that decreases continuously to zero at aphelion. This relativistic change in the kinetic energy of the earth produces an equivalent change in the space-time of the expanding photon system confined by the earth’s orbit. This change is assessed by the Doppler effect fx = (1 – ve/c) fo, where fx is the actual frequency of the photons emitted from the earth to the photon system; fo is the baseline frequency.

Based on the aforementioned geometric approach in celestial mechanics, fo is the hypothetical constant frequency of the photons which the earth would emit if its orbit were an ideal circle, that is, when the numerical eccentricity ε is set to zero. In this case, the distance of the earth to the sun would be constant – for instance, it can be set equivalent to the semimajor axis a (see above).

During the period of repulsion, the frequency of the photons emitted by the earth as a source continuously decreases with respect to the sun and the enclosed photon system as a receiver. The maximal redshift will be observed at ve(max). Moving from the point of ve(max) to aphelion, the redshifts of the earth will continuously decrease. At aphelion, there will be no redshift at all, because ve = 0 and fx = fo. The change in the frequency Δ f during the period of repulsion can be assessed by differential calculus. The maximal change Δfmax is achieved at ve(max). It is inversely proportional to the maximal linear eccentricity of the earth’s orbit (see equation of numerical eccentricity ε above):

ε = Δr/2a = (rmax − rmin)/2a.

When the universal equation is applied as a rule of three, we obtain a simple relationship between the numerical eccentricity of the earth’s orbit and the change in frequency of the enclosed photon system:

ε = Δr/2a = fo/Δfmax = SP(A)

The maximal escape velocity ve(max) of each planet can be obtained from astronomic tables. From ve(max) and the maximal change Δ fmax we can determine the maximal redshift of the earth by calculating the Doppler effect.
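
A minimal sketch of this last step, using the relation fx = (1 − ve/c)fo quoted above; the value of ve(max) (about 0.5 km/s) is taken from the radial-velocity sketch earlier in this chapter and the baseline frequency fo is an arbitrary optical reference:

```python
# A sketch of this last step, using the relation f_x = (1 - v_e/c) f_0 quoted
# above. The value of v_e(max) (~0.5 km/s) is the one obtained in the
# radial-velocity sketch earlier in this chapter; f_0 is an arbitrary optical
# reference frequency chosen only for illustration.

c = 299_792_458          # speed of light, m/s
v_e_max = 497.0          # assumed maximal escape (radial) velocity of the earth, m/s
f_0 = 5.0e14             # baseline photon frequency, Hz (arbitrary optical reference)

f_x = (1 - v_e_max / c) * f_0
delta_f_max = f_0 - f_x

print(f"relative shift = {v_e_max / c:.2e}")      # ~1.7e-6, a dimensionless SP(A)
print(f"delta_f_max    = {delta_f_max:.3e} Hz")   # maximal redshift for this f_0
```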

We can now apply the same procedure to the period of attraction when the planet moves from aphelion to perihelion and determine the maximal velocity of attraction va(max). It will correspond to the maximal violetshift. If we use differential and integral calculus, we can calculate the magnitude of these quantities for each point of the planet’s orbit and thus determine precisely the relativistic changes in space (distance from the sun) and time (frequency) of the photon level during one revolution.

The frequency of the photons determines the energy of the photon system E ≈ f. The same is true for its density ρ. If we now apply the universal equation for one complete revolution, we obtain another valuable relationship:

Erepulsion : Eattraction = ρrepulsion : ρattraction = ve(max) : va(max) = constant = 1

The space-time of the enclosed photon system changes relativistically within one revolution. From perihelion to aphelion, space continuously expands and photon frequency decreases in a reciprocal manner as observed by the redshifts. The density of the enclosed photon system decreases in the same manner and achieves its minimal value ρmin at aphelion.

This minimal density gradually increases during the period of attraction. The overall density of the period of repulsion is equal to that of the period of attraction. The same holds for the energy exchange and the maximal velocity of the two periods (conservation of energy).

The revolution of the earth around the sun can be regarded as an action potential or alternatively as an interaction between the earth and the photon system (axiom of reducibility). In this elaboration, we regard energy exchange between the sun and the photon system under the condition “ceteris paribus“. We apply the same condition to the energy exchange between the earth and the universe.

During one revolution of the planet, we observe the reciprocal behaviour of the LRC of the two contiguous levels (third axiom of application, see Axiomatics) – the level of matter, as represented by the earth, and the photon level, as represented by the enclosed photon system. When the earth moves from perihelion to aphelion, it emits photons with a decreasing frequency and mass mphoton = mp f, that is, the earth continuously loses less and less mass to the photon system. As the input from the universe is unchanged, the earth, so to say, “gains weight“ during the period of repulsion. The planet exhibits maximal mass and density at aphelion, which is the farthest distance to the sun: rmax = [photon-space]max.

At this point, the enclosed photon system behaves reciprocally to the earth – its energy, LRC, mass and density, being proportional to the frequency of the emitted photons, reach their minimal values. According to Newton’s law of gravity, the gravitational force is proportional to the mass of the interacting objects. From this it follows that earth’s gravitation augments during the period of repulsion and achieves its maximal value at aphelion, where the mass of the earth is maximal. At this point, the attraction of the earth to the sun begins (period of attraction).

At the end of the period of attraction, that is, at perihelion, which is the shortest distance to the sun, the mass of the earth is minimal and the planet begins to move away from the sun. During the period of attraction, the earth emits photons with growing frequency (violetshifts) and mass: so to say, the planet begins to “lose weight“. At perihelion, the earth has a minimal energy, mass and density. At the same time, the enclosed photon system reaches its maximal energy, density and mass, and the smallest space.

The gravitational force between two objects is proportional to their masses and inversely proportional to the square of their distance, as stated by Newton’s law of gravity. To compensate for the diminishing mass of the earth, the distance to the sun begins to augment, so that the overall gravitational energy remains constant. The earth begins to move away from the sun.

These descriptions are circumlocutions of the axiom of reciprocal behaviour of the LRC of contiguous levels which is a practical application of the universal reciprocity of space and time. This is one possible explanation of gravitation as a rotation with respect to the law of gravity.

Alternatively, we can describe the turning points at aphelion and perihelion with the restoring force in Hooke’s law (see Vol II, chapter 3.2). We can regard the space-time of the photon level as an elastic medium (ether). When the enclosed photon system expands maximally at aphelion, photon space-time at the opposite side of the earth contracts and develops a restoring force that brings the earth back to the sun. When the space-time of the photon system reaches its maximal state of contraction (maximal restoring force) at perihelion, it begins to expand by taking the earth with itself. This phenomenon can be observed in fluids and elastic matter.

Such didactic presentations are descriptive iterations of the basic property of space-time – the reciprocity of space and time. They visualize the mechanism of gravitation by showing that it obeys the Universal Law, which is ubiquitous in all physical phenomena. The mystery of gravitation is thus de-mythologized once and for all.

The revolution of the earth around the sun is a periodic event of constant space-time EA, which repeats infinite times: E = EA f. If we regard the orbit of the sun as a revolution path around the centre of our galaxy, the Milky Way, we shall obtain for the earth’s orbit an eccentric wave oscillating around the sun’s orbit. This example shows that all gravitational rotations can be described in terms of superimposed waves, which are U-sets and contain themselves, that is, space-time, as an element. In this sense, we can regard the universe as the total set of all superimposed rotations which are systems or levels of the primary term. This holds for macrocosm and microcosm. The elementary particles can also be regarded as rotating systems of space-time (see Vol II, section “quantum mechanics”).

This presentation includes a new aspect that facilitates our understanding of gravitation dramatically. We depart for the first time in the history of physics from the vertical energy exchange between matter and photon space-time and show that it follows the Universal Law, just as any other energy interaction. The crucial fact is that photon space-time exhibits the same properties as matter; for instance, photons also have a mass, which is an energy relationship.

Current physics preaches instead that only matter has a mass, while photons are „massless“ particles. This novel explanation of gravitation was enabled by major breakthroughs in classical mechanics, wave theory, electromagnetism, thermodynamics and quantum mechanics as presented in Volume II. It shows that gravitation is a particular energy exchange, just as electromagnetism and heat, and can be consistently integrated with other forces (levels of energy exchange) as is shown on one page in Table 1. This simple interpretation of gravitation in the light of the Universal Law eliminates the search for the hypothetical “graviton” as obsolete and transforms physics from “fake science” to true science that departs from the primary term of our consciousness.

Notes:

1. It can be proven that Shannon’s definition of information is an iteration of the primary term.

2. PA Tipler, Textbook of Physics, p. 531 (older edition)

3. See, for instance, K. Lancaster, Introduction to modern microeconomics, Rand McNally College Publishing Company, Chicago, 1974, p. 12.

Nota bene: This article is defined as the most important publication on the Internet and in the scientific literature as it contains the explanation how to overcome gravity and create new technologies based on anti-gravity. This will be the greatest scientific revolution of this humanity.

 

I.10. How to Calculate the Mass of Neutrinos?

As physics cannot explain the quantity mass, it has produced a number of paradoxical statements that will merit the attention of future scientists as valuable documents on the intellectual confusion of this empirical discipline during the twentieth century. One of them is the dispute over whether neutrinos have a rest mass or not. This has led to the conduct of some expensive experiments (1).

In addition, it is generally believed that the destiny of the standard model of modern cosmology is closely linked with this question: the existence of neutrinos with rest mass would inevitably lead to the rejection of this model.

In section 9. (Volume II) I refute the standard model on the basis of the Universal Law. This example anticipates the results of the new cosmology. It is a leitmotif of the present volume that mass does not exist as a real physical property. It is an abstract quantity defined within mathematics and thus an object of thought. In terms of mathematics, mass is a relationship of the space-time (energy) of real systems. The actual reference system of space-time is the basic photon h, also known as Planck’s constant. All other systems are compared to it according to the principle of circular argument, which is an application of the principle of last equivalence for the parts.

This is the epistemological basis of the new Axiomatics that also holds for neutrinos. According to it, neutrinos have a mass (energy relationship) because all systems have an energy. As all real systems are open, that is, they interact with other systems, their space-time can be measured (compared).

The great problem of neutrino research is to detect an interaction of neutrinos with other particles of matter and measure it precisely – such interactions are quite rare and require specific conditions. However, as all systems are open and interrelated (space-time is a prestabilized harmony), we can easily calculate the mass of neutrinos from quantum processes that involve these particles.

We shall propose a simple method of calculating the mass of neutrinos from a beta decay. This phenomenon involves the elementary particles of matter and is quite common. As their energy can be precisely determined, we can, for instance, calculate the mass (energy relationship) of neutrinos from the space-time of the proton and the neutron (see Table 1).

Before we discuss the method, we shall present a concise survey of the history of the discovery of neutrinos, as it is pathognomonic of modern physics. The discovery of neutrinos is closely linked to the closed character of space-time, which manifests itself as conservation of energy. This property of space-time is covered by the axiom of conservation of action potentials. It is important to observe that, although the conservation of energy is now unanimously accepted as the 1st law of thermodynamics, there is still no theory that explains the conservation of energy from a cognitive point of view:

“The theory of conservation of energy was based entirely on experimental observation. There existed no fundamental physical theory that predicted the conservation of total energy. Nor, in fact, does such a theory or equation exist now.“ (2)

The ubiquitous phenomenon of energy conservation can be explained for the first time in the history of physics with the new theory of the Universal Law that begins with the properties of space-time. As all systems of space-time are U-subsets that contain space-time (energy) as an element, they always manifest the properties of the whole, such as closed character (conservation of energy), continuousness, discreteness and openness. We shall show that these aspects of space-time are central to the discovery of neutrinos and the accompanying discussion.

At the turn of the 20th century, the radioactivity of alpha, beta and gamma rays was discovered by Becquerel, Rutherford and others. This triggered the development of the Bohr model (chapter 7.1, Volume II). The gamma rays emitted during a nuclear decay were found to be monoenergetic. This energy interaction can be presented by a mathematical equation reflecting the principle of last equivalence:

Eγ = Ei – Ef ,

where Eγ is the energy of the emitted gamma photons, Ei is the initial energy of the radioactive nucleus and Ef is the final energy of the nucleus after radiation. The same result holds true for alpha decay as alpha rays have also been found to be monoenergetic. However, when a nuclear decay resulted in the emission of beta rays (electrons), it was found that they had a continuous energetic spectrum from zero, i.e., undetectable, to

Emax = Ei – Ef .

For the first time in the history of physics, an energy interaction did not allow the building of an exact mathematical equivalence:

Ebeta ≤ Emax = Ei – Ef , respectively,

Efinal system ≤ Einitial system.

This result triggered a profound theoretical crisis in physics. Unfortunately, it did not lead to the discovery of the Universal Law and the development of a novel axiomatics based on the principles of mathematical formalism, but to a partial solution, which has satisfied the modest mathematical expectations of physicists in this field.

In the new Axiomatics we clearly state that space-time is transcendental, so that any physical equivalence which we build, except the last one, is a mathematical approximation defined by abstraction and is based on the application of closed, real numbers. Any real equivalence is, on the contrary, transcendental and of infinite order. This means that any energy exchange involves infinite levels and systems of space-time. Due to our modest technical means, we can only register few levels and particles of space-time. Exactly this knowledge has been transmitted by beta decay.

When this energy exchange was discovered for the first time, it seemed to implicate the creation or annihilation of energy, thus violating the law of conservation of energy. Initially, Bohr and the majority of physicists were inclined to discard the law of conservation of energy on the ground that a general law, which had been founded on experimental results (in fact, this law has never been founded on validated experiments because there are no closed systems of space-time that can be observed with respect to this property of space-time; see also quotation above), should be rejected if a further experiment failed to confirm it.

Pauli, on the contrary, noted correctly that this would mean the discarding of all laws of energy conservation, which had been formulated in classical mechanics, for instance, the conservation of linear and angular momentum. If this should have been the case, it would have triggered the same foundation crisis in physics as the one observed in mathematics at the same time.

In 1930, Pauli suggested in a letter that the problem could be circumvented if the existence of a new particle were postulated. It should have the following properties:

1. it should have no electric charge, that is, its cross-sectional area should be zero;

2. it should have a high ability to penetrate matter, that is, it should not interact with particles of matter;

3. its mass should be most probably zero, or nearly so, since beta rays with energies nearly equal (approximation) to Emax had been observed (recall that photons are still regarded as particles without charge (area) and mass).

If Bohr stands for the empirical dogma, Pauli stands for the priority of theoretical consciousness over empiricism. The reader may guess who has won at the end. However, this does not alter the fact that Pauli has been essentially wrong with respect to charge. In this case, he merely followed the central physical dogma based on complete agnosticism regarding the geometric nature of this quantity.

To appreciate how radical Pauli’s proposal was, one should bear in mind that at that time only two particles were known – the electron and the proton (see Bohr model, volume II). So to say, Pauli was the first to “invent“ a new particle. Based on the new Axiomatics, I am much more radical – I predict the existence of infinite systems and levels of space-time and thus abolish the standard model as reductio ad absurdum.

In 1932, J. Chadwick discovered the neutron. This encouraged Fermi to call Pauli’s particle “neutrino“, which means “little neutral one“ in Italian. Finally, in 1956, the neutrino – in fact, it was an anti-neutrino – was registered in a reactor at Savannah River.

Today, it is generally believed that there are six different kinds of neutrinos: the electron-neutrino νe, the muon-neutrino νμ and the tau-neutrino ντ, and their corresponding anti-particles. The simplest beta decay associated with the occurrence of neutrinos is the decay of an unstable neutron n into a proton and an electron e:

n → p + e + anti-νe

During this nucleus decay a surplus energy Es = 0.782 MeV is observed. This energy is attributed to the electron-antineutrino(s).

Normally, it would be sufficient to know the magnitude of this energy to determine the mass of the antineutrino. The problem is that this decay exhibits a continuous distribution of the kinetic energy of the emitted beta-particles (kinetic electrons) from nearly zero to the maximal available energy. For this reason, it is only possible to postulate an upper limit of the energy of antineutrinos.

As these particles do not enter into energy interactions with other particles of matter, there is no possibility of determining their energy and mass in a direct way. These quantities can now be easily calculated from the known data of this beta decay by considering the mass mp of the basic photon h (see here). We shall only present the general approach and leave the tedious calculation to professional physicists.

The energy distribution of beta rays can be presented as a curve that can be regarded as an aggregated action potential (U-set) of the underlying beta particles which exhibit continuous, but discrete kinetic energies. We can determine the area under the curve, AUC (area integral), and present this quantity in terms of the aggregated charge (area) of the kinetic electrons.

Alternatively, the curve can be described in terms of statistics. It builds a peak that represents the maximum level of the emitted beta energy, that is, the maximum number of emitted electrons (electrons with the most frequent energy E). When this energy is compared with the maximal kinetic energy Emax of the emitted electrons, its magnitude is about one third of the latter: E ≈ Emax/3.
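
The purely numerical step described here can be sketched as follows; the tabulated spectrum is a hypothetical shape chosen for illustration, not measured data, and the integration is a plain trapezoidal rule:

```python
# A sketch of the purely numerical step described above: given a tabulated beta
# spectrum (energies and counts -- a hypothetical shape here, not measured data),
# obtain the area under the curve (AUC), the mean energy and the most frequent
# energy with a plain trapezoidal rule.

E_max = 0.782                                   # maximal beta energy, MeV (neutron decay)
n = 400
E = [E_max * i / n for i in range(n + 1)]       # energy grid, 0 .. E_max
counts = [e * (E_max - e)**2 for e in E]        # hypothetical spectrum shape, arbitrary units

def trapezoid(y, x):
    """Trapezoidal integral of the tabulated function y(x)."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i]) for i in range(len(x) - 1))

auc = trapezoid(counts, E)                                       # area under the curve
E_mean = trapezoid([e * c for e, c in zip(E, counts)], E) / auc  # count-weighted mean energy
E_peak = E[max(range(len(counts)), key=lambda i: counts[i])]     # most frequent energy

print(f"AUC    = {auc:.4f} (arb. units x MeV)")
print(f"E_mean = {E_mean:.3f} MeV")
print(f"E_peak = {E_peak:.3f} MeV   (about E_max / 3 for this shape)")
```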

The maximal energy of beta rays is given in special tables for each decay. Thus we can easily calculate the total distribution energy of beta rays ∑Ee of any nucleus decay from known data, for instance, as AUC. This total energy can be expressed by the universal equation as a function of the mass of the basic photon mp:

∑Ee = ∑ mp c² fe = mp c² ∑ fe

This equation confirms the universal character of mp, which is a fundamental constant of the new Axiomatics – it helps unify all known fundamental constants in physics and thus all separate disciplines in this science, such as gravitation with electromagnetism, which was not possible before (see Table 1). The aggregated time of the beta rays ∑fe is given in comparison to the time of the electron at rest, fe = fc,e = 1 (Compton frequency).

If we depart from the neutron decay in the equation above, we obtain for the energy and mass of the electron-antineutrinos the following simple equations:

Eanti-ν = En – (Epr + ∑Ee )

manti-ν  = mp ( fc,n –  fc,pr –  ∑ fe ) 

The only unknown variable in both equations is the sum (integral) of the frequency distribution ∑fe of the emitted beta particles. This quantity gives the relativistic increase in the energy of the electrons during beta decay in comparison to their rest energy. When such calculations are performed, it may transpire that the antineutrinos exhibit a similar curve of continuous energy distribution as observed for beta rays.

In order to prove the validity of the above equations, we shall use them to calculate the surplus energy Es and its mass (energy relationship) ms from the neutron beta decay. In this case, we only have to substitute the aggregated time of the beta rays ∑fe with the Compton frequency of the electron fc,e, which is the intrinsic time of this particle at rest (see chapter 7.1, Volume II, and Table 1):

ms = mp (fc,n – fc,pr – fc,e) =

= 0.737×10⁻⁵⁰ kg × 1.8934×10²⁰ = 1.395×10⁻³⁰ kg

Es = ms c² = 1.395×10⁻³⁰ kg × 8.987×10¹⁶ m²s⁻²

= 1.253×10⁻¹³ joule = 0.782 MeV

We obtain exactly the surplus energy Es of the neutron decay given above.
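
As a cross-check of this calculation, the sketch below recomputes ms and Es directly from CODATA rest masses, under the assumption (consistent with the values used above) that the mass of the basic photon is mp = h/c² and that the Compton frequencies are fc = mc²/h:

```python
# Numerical cross-check of the calculation above: m_s and E_s are recomputed from
# CODATA rest masses, assuming m_p = h / c^2 for the basic photon and
# f_c = m c^2 / h for the Compton frequencies (consistent with the values quoted
# in the text).

h = 6.62607e-34           # Planck constant, J s
c = 299_792_458           # speed of light, m/s
eV = 1.602177e-19         # joule per electron volt

m_n  = 1.674927e-27       # neutron rest mass, kg
m_pr = 1.672622e-27       # proton rest mass, kg
m_e  = 9.109384e-31       # electron rest mass, kg

m_p = h / c**2            # basic photon mass, ~0.737e-50 kg

def f_c(m):
    """Compton frequency of a particle of rest mass m."""
    return m * c**2 / h

m_s = m_p * (f_c(m_n) - f_c(m_pr) - f_c(m_e))
E_s = m_s * c**2

print(f"m_p = {m_p:.3e} kg")                  # ~0.737e-50 kg
print(f"m_s = {m_s:.3e} kg")                  # ~1.39e-30 kg
print(f"E_s = {E_s / (1e6 * eV):.3f} MeV")    # ~0.782 MeV, the surplus energy quoted above
```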

As we see, the only practical problem in the calculation of the neutrinos’ mass is to determine exactly the total energy of the beta rays in any nucleus decay involving neutrinos. This should not be a major problem for modern experimental physics, which is applied mathematics. This is another prospective test for the validity of the new Axiomatics and a proof of the obsolescence of fundamental experimental research.

Notes:

1. In June 1998, it was reported in the mass media that in an experiment performed in Hawaii, neutrinos were found to have a mass. This “sensational result“ is a prospective, though superfluous, confirmation of the Universal Law and the new theory which proves that mass is a mathematical quantity – a relationship of the energy of two systems (axiom of reducibility) – so that every particle of space-time has a mass.

2. RA Llewellyn, “Discovery of Neutrinos“, essay in PA Tipler, Textbook on Physics, p. 218-220 (I have used an earlier edition of this textbook, so the pages may have changed. Note, George).

Attachment:

Press Release of the Nobel Prize Committee

6 October 2015

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics for 2015 to

Takaaki Kajita
Super-Kamiokande Collaboration
University of Tokyo, Kashiwa, Japan

and

Arthur B. McDonald
Sudbury Neutrino Observatory Collaboration
Queen’s University, Kingston, Canada

“for the discovery of neutrino oscillations, which shows that neutrinos have mass”

Metamorphosis in the particle world

The Nobel Prize in Physics 2015 recognises Takaaki Kajita in Japan and Arthur B. McDonald in Canada, for their key contributions to the experiments which demonstrated that neutrinos change identities. This metamorphosis requires that neutrinos have mass. The discovery has changed our understanding of the innermost workings of matter and can prove crucial to our view of the universe.

Around the turn of the millennium, Takaaki Kajita presented the discovery that neutrinos from the atmosphere switch between two identities on their way to the Super-Kamiokande detector in Japan.

Meanwhile, the research group in Canada led by Arthur B. McDonald could demonstrate that the neutrinos from the Sun were not disappearing on their way to Earth. Instead they were captured with a different identity when arriving to the Sudbury Neutrino Observatory.

A neutrino puzzle that physicists had wrestled with for decades had been resolved. Compared to theoretical calculations of the number of neutrinos, up to two thirds of the neutrinos were missing in measurements performed on Earth. Now, the two experiments discovered that the neutrinos had changed identities.

The discovery led to the far-reaching conclusion that neutrinos, which for a long time were considered massless (?), must have some mass, however small.

For particle physics this was a historic discovery. Its Standard Model of the innermost workings of matter had been incredibly successful, having resisted all experimental challenges for more than twenty years. However, as it requires neutrinos to be massless (?), the new observations had clearly showed that the Standard Model cannot be the complete theory of the fundamental constituents of the universe.

The discoveries rewarded with this year’s Nobel Prize in Physics have yielded crucial insights into the all but hidden world of neutrinos. After photons, the particles of light, neutrinos are the most numerous in the entire cosmos. The Earth is constantly bombarded by them.

Many neutrinos are created in reactions between cosmic radiation and the Earth’s atmosphere. Others are produced in nuclear reactions inside the Sun. Thousands of billions of neutrinos are streaming through our bodies each second. Hardly anything can stop them passing; neutrinos are nature’s most elusive elementary particles.

Now the experiments continue and intense activity is underway worldwide in order to capture neutrinos and examine their properties. New discoveries about their deepest secrets are expected to change our current understanding of the history, structure and future fate of the universe.

_________________________________________

An Open Letter to the Orion “Nobel Prize Committee”

Dear Sir, 

don’t you realize how ridiculous you are? You are like a bunch of moles pretending to give prizes to bearers of light. Why don’t you come up to the surface and experience the light first hand. Why don’t you read the new physical theory of the Universal Law to understand the nature of Energy and All-That-Is. Why all these stupid prizes for proven blindness… Stop it before we shall stop this insanity with our ascension when the fools will be called fools and will become an object of ridicule to the whole humanity.

With best regards

Dr. Georgi Stankov

 

II. Wrong Space-Time Concepts of Conventional Physics and Their Revision in the Light of the New Axiomatics of the Universal Law

 

II.1. Space-Time Concept in Classical Physics

Like mathematics, physics has failed to define the primary concept of space-time in terms of knowledge. This principal flaw has been carried on in all subsequent ideas which this discipline has developed so far. The method of definition of space-time in physics is geometry. It begins with Euclidean space of classical mechanics.

The substitution of real space-time with this abstract geometric space necessitated the introduction of two a priori assumptions on space and time by Newton that have not been seriously challenged since. Otherwise, we would not witness the parallel existence of classical mechanics and the theory of relativity. If Einstein’s theory of relativity were a full revision of Newtonian mechanics, the latter would no longer exist.

In the new Axiomatics, we integrate all particular disciplines of physics into one consistent axiomatic system of physics and mathematics and thus eliminate them as separate areas of scientific knowledge.

There is no doubt that we cannot develop any scientific concept about the physical world without establishing a primary idea of space and time. Newton’s primary notion of space and time is documented in his Mathematical Principles of Natural Philosophy (the Principia):

“Absolute Space, in its own nature, without regard to anything external, remains always similar and immovable. Relative Space is some movable dimension or measure of the absolute spaces; which our senses determine, by its position to bodies; and which is vulgarly taken for immovable space… And so instead of absolute places and motions, we use relative ones; and that without any inconvenience in common affairs; but in Philosophical disquisitions, we ought to abstract from our senses, and consider things themselves, distinct from what are only sensible measures of them. For it may be that there is nobody really at rest, to which the places and motions of others may be referred.”

“Absolute, True, and Mathematical Time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called Duration: Relative, Apparent, and Common Time is some sensible and external (whether accurate or unequable) measure of Duration by the means of motion, which is commonly used instead of True time; such as an Hour, a Day, a Month, a Year… All motions may be accelerated and retarded, but the True, or equably progress, of Absolute time is liable to no change.”

From: I. Newton, Philosophiae Naturalis Principia Mathematica; translated from Latin by A. Motte, London, 1729.

Thus Euclidean space is the abstract reference surrogate of “absolute space”, to which all other physical motions are compared by the method of geometry according to the principle of circular argument. It is the primary inertial reference frame of all reference frames, in which Newton’s law of inertia (1st law) holds true. This law is an abstract tautological statement within geometry and cannot be applied to any real reference system – for instance, to a gravitational system which is always in rotation (Kepler’s laws) and exhibits a centripetal acceleration.

The reason for this is that Euclidean space has nothing to do with real space-time. Classical mechanics, which is based on this artificial space, contains no knowledge of the properties of space-time, as they are defined at the beginning of the new Axiomatics of the Universal Law.

According to Newton, space-time is “absolute, empty, inertial”, that is, free of forces, and can be expressed in terms of straight lines. These properties are summarized in his law of inertia postulating immobility (rest) or a straightforward motion (translation) with uniform velocity (a = 0) for all objects, on which no force is exerted. In this geometric space “absolute time is liable to no change”: f = 1/t = const. = 1.

In the Axiomatics I have proved that geometric space can only be built after we have arrested time within mathematics in an a priori manner. The law of inertia stands, however, in apparent contradiction to Newton’s second and third laws and to the law of gravity, which describes the gravitational force as the origin of acceleration. While the first law is a mathematical fiction, the other laws of classical mechanics assess reality: there is no place in real space-time (the universe) where no gravitational or other forces are exerted – for instance, we always observe rotations of celestial bodies (Kepler’s laws). As any rotation has an acceleration a > 0, the law of inertia is not valid for rotations, which are the only motions in space-time.

This paradox of classical mechanics justifies Max Born’s assessment of Newton’s cardinal failure:

“Here we have clearly a case in which the ideas of unanalysed consciousness are applied without reflection to the objective world.”(1)

Since then, this remark can claim ubiquitous validity for the mindset of all physicists.

The question is why physics sticks to the law of inertia if it is an evidently wrong, abstract idea (idio) without any physical correlate – for instance, why was it not abolished by Einstein in his theory of relativity? The explanation of this failure is given by Max Born again:

“In Newton’s view the occurrence of inertial forces in accelerated systems proves the existence of absolute space or, rather, the favoured position of inertial systems. Inertial forces may be seen particularly clearly in rotating systems of reference in the form of centrifugal forces. It was from them that Newton drew the main support for his doctrine of absolute space.” (2)

The basic paradigm behind the law of inertia is rather trivial: if a rotating body were to move free of force in empty space, it would conserve its uniform tangential velocity, expressed as a straight line (vector), forever. This property of objects, called “inertia”, is regarded as an a priori faculty inherent to matter.

This idea immediately evokes another principal objection:

“The law of inertia (or persistence) is by no means as obvious as its simple expression might lead us to surmise. In our experience we do not know of bodies that are really withdrawn from all external influences: and, if we use our imaginations to picture how they travel in their solitary rectilinear paths with constant velocity through astronomic space, we are at once confronted with the problem of the absolutely straight path in space absolutely at rest…” (3)

Let us recall that the existence of straight parallel lines has not been proven in geometry (check Euclid’s parallel postulate). As space-time is closed, all subsets of it manifest this property and perform rotations, which can be described by closed geometric figures, such as a circumference (closed [1d-space]) or a spherical surface (closed [2d-space]). This is a basic tenet of the new Axiomatics with which, in particular, quantum mechanics can be integrated for the first time with classical mechanics.

In addition, any rotation is a system of space-time that can be assessed in terms of force, acceleration (electric field), or any other abstract quantity of space-time = energy. This is another basic statement of the new Axiomatics which I have proved for all levels of space-time that have been described by physics so far.

This fact is reflected in Lobachevsky’s geometry (also known as hyperbolic or non-Euclidean geometry), which reduces Euclidean space to a partial geometric solution.

From this analysis of the space-time concept of classical mechanics, we can conclude:

1. The introduction of Euclidean space for real space-time by Newton is the primary epistemological flaw of classical mechanics. The properties of this geometric space are:

a) emptiness (no forces, no acceleration);

b) homogeneity;

c) the existence of straight paths (lines);

d) absoluteness of space and time – no change of space and time magnitudes (immobility or translation).

2. These properties of Euclidean space are embodied in the law of inertia, which is an erroneous abstract idea without any real physical correlate. This law builds a basic antinomy with the other laws of mechanics, which assess real forces, accelerations and rotations.

3. While the absoluteness of space and time in classical mechanics is rejected by the theory of relativity (see the following publications), the homogeneity of space-time, which is tacitly accepted by the same theory, is refuted by quantum mechanics.

4. However, these disciplines make no effort to define the properties of the primary term of space-time in terms of knowledge. For this reason, classical mechanics still exists as a separate discipline, although the basic antinomy of physics appears in a disguised form in the initial-value problem (deterministic approach of classical mechanics) versus Heisenberg uncertainty principle of quantum mechanics (intuitive notion of the transcendence of space-time; see Volume II, chapter 7.3, p. 315).

This line of argumentation will be followed in the next publications discussing further blunders and contradictions in the concept of space-time of conventional physics.

Notes:

1. M. Born, Einstein’s Theory of Relativity, Dover Publications, New York, 1965, pp. 57-58.

2. M. Born, ibid., p. 78.

3. M. Born, ibid., pp. 29-30.

 

II.2. The Concept of Relativity in Electromagnetism

The partial correction and further development of Newtonian mechanics was done by Einstein – first, in the special theory of relativity and then in the general theory of relativity. The latter is the basis of modern cosmology. However, the origins of the theory of relativity were laid in electromagnetism and this concept is meaningless from an epistemological point of view without considering the concept of ether.

The main achievements in electromagnetism (Maxwell, Lorentz) are based on the firm belief that ether exists and is another form of substance, which fills empty Euclidean space, that is, it should substitute empty space. The further development of the ether concept, leading to its refutation, has furnished the two basic ideas of the theory of relativity:

1. Light has a constant finite velocity for all observers;

2. The ether, which has been regarded as an invisible elastic matter, substance, or continuum, where light is propagated, cannot fulfill the expectations attributed to the absolute, static Euclidean space of mechanics (see previous publication). Because of this, there is no possibility of proving the principle of simultaneity that has been considered valid in classical mechanics. Instead, it has been found that all phenomena appear to be relative for any observer with respect to space and time.

It was Einstein’s accidental stroke of genius to realize the full importance of this simple fact. Before we proceed with Einstein’s theory of relativity and explain why he failed to discover the “universal field equation” (read here), we must first discuss the precursors of the concept of relativity in electromagnetism.

From a cognitive point of view, electromagnetism has always been a dualistic theory. At the time when Huygens established the wave theory of light, Newton already supported the concept of particles (corpuscles). The dispute between these two opposite views was very stimulating and triggered the first measurements of the speed of light. As early as 1676, Römer was able to estimate the speed of light from astronomical observations (the modern value is c = 299 792 km/s).

In 1727 Bradley discovered another effect of the finite speed of light, namely that all fixed stars appear to perform a small annual motion (the aberration of light) due to the revolution of the earth around the sun. Since Foucault’s experiments (1850) we know that the speed of light in air is greater than in denser media such as water. This is the first confirmation of the maximal finite speed of light in “empty space”.

The major objective of electromagnetism, which evolved in the meantime into a separate discipline beside classical mechanics, was to find an explanation for the propagation of light in the empty space introduced by Newton in mechanics. If light were a transversal wave, as most experiments indicated, then it could only be propagated in an elastic medium, as was preached at that time by the optical theory of Fresnel, who was a deeply spiritual person and thus a great exception as a Frenchman.

These considerations led to the development of the ether concept. This concept is of central theoretical importance, for it is a synonym for the primary term. I have shown in Volume II, chapter 3.2 that the General continuum law is the differential form of the Universal Law in an elastic medium, from which the classical wave equation (Volume II, chapter 4.5), Maxwell’s four equations of electromagnetism (Volume II, chapter 6.13) and Schrödinger’s wave equation of quantum mechanics (Volume II, chapter 7.2) have been derived within mathematics.

The ether concept was the most elaborated intuitive perception of the primary term prior to the discovery of the Universal Law. Its refutation on the basis of the Michelson-Morley experiment in 1887 was a consequence of the failure of the ether concept to exclude all false properties attributed to the primary term since the introduction of Euclidean space in classical mechanics. The Michelson-Morley experiment embodied the vicious circle of empirical agnosticism, to which physics had been subjected before the Universal Law was discovered and proved to be true in physics and bio-science in 1994-1995.

The projection of the properties of Euclidean space to ether led to the following cognitive outlook of electromagnetism:

  • ether was a real, absolute reference system of material character, analogous to the absolute, abstract Euclidean space introduced by Newton;
  • therefore, ether was defined as a static, that is, “immovable” (Newton) elastic medium that filled the empty space of mechanics;
  • in this medium, light was propagated with the speed c;
  • all other motions could be set in relation to this real, immovable reference system of absolute character.

The objective of the ether hypothesis was not only to provide a logical explanation of electromagnetism from a cognitive point of view, but also to eliminate the empty Euclidean space of classical mechanics, which caused numerous theoretical problems that the physicists of that time could not reconcile with empirical evidence. The aim of the Michelson-Morley experiment was to prove this hypothesis.

Before I discuss its results, I shall explain why this hypothesis, which was on the right track, must be refuted from a theoretical point of view.

The ether concept incorporates the dualistic view in optics and classical mechanics, whereby medium and waves are considered as two distinct entities (N-sets). This is the classical epistemological flaw one regularly encounters in conventional physics.

An N-set is a mathematical (or any other) set of elements that excludes itself as an element. For instance, the vacuum, the void, is an N-set: according to current failed physics it contains all the elementary particles, which have energy and mass, i.e. they are something, while the void itself is nothing. Another example of an N-set is the set of all “2”-numbers, which excludes itself as an element because it is one (1) set. All rational numbers are thus N-sets, as they exclude the continuum as a continuous entity, while all transcendental numbers are U-sets that contain themselves and the whole, the continuum, as an element.
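As a toy illustration of this distinction in code (it is not part of the author's formalism, and the names u_set and n_set are purely illustrative), a Python list can be made to contain itself as an element, mimicking a U-set, while a Python frozenset can never contain itself, mimicking an N-set:

# Toy illustration only: a container that holds itself vs. one that excludes itself.
u_set = ["some element"]
u_set.append(u_set)          # the container now contains itself as an element
print(u_set[1] is u_set)     # True - the whole is one of its own elements (U-set analogue)

n_set = frozenset({1, 2, 3})
print(n_set in n_set)        # False - the set excludes itself as an element (N-set analogue)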

However, humanity has failed so far to develop a transcendental mathematics. With the discovery of the Universal Law I paved the way for the development of such advanced mathematics that properly assesses All-That-Is. I have discussed these theoretical problems of mathematics in Volume I and Volume II in detail and resolved them while abolishing the foundation crisis of mathematics in 1995.

For this reason physics has made a veritable mental salto mortale (full somersault) by declaring the vacuum to be “energy-rich”, from which the elementary particles are created according to certain symmetry rules. This is another epic idiocy (idio) of the standard model of physics.

For the first time in the new Axiomatics, all real systems and levels of space-time are regarded as U-sets that contain themselves and the Whole = energy = space-time = the primary term as an element. They can only be distinguished in the human mind by means of mathematics, but not in real terms. This is a recurrent motif of the entire new theory of science of the Universal Law.

When we apply this fundamental axiomatic knowledge to ether, we must conclude that there is no possibility of distinguishing between motion as wave and medium. I have shown in Volume II that the wave equation is derived by considering the rotation of the particles in the medium.

In the new Axiomatics, motion is a synonym for the primary term = space-time = the (elastic) continuum (principle of last equivalence). The definition of its basic quantity, velocity, is axiomatically derived from it as one-dimensional space-time within mathematics (Axiomatics, point 21.). Therefore, we can write the following equivalence with respect to ether:

ether as medium = continuum = photon space-time =

= c = c² = LRC = cⁿ = constant

This equation simplifies our understanding of the concept of ether and relativity to an extraordinary extent. It says that [1d-space-time] is constant for each level of space-time – for example, the constant speed of light is a specific [1d-space-time] quantity of the constant photon space-time. However, constant space-time is in incessant motion – constancy of space-time and its motion do not exclude each other, but are equivalent, complementary aspects of the primary term.

Bearing this in mind, it is easy to understand why the result of the Michelson-Morley experiment has led to the refutation of the ether concept, embodying the cognitive flaws of Newtonian mechanics, and at the same time confirmed the nature of space-time as defined in the new Axiomatics.

The ether hypothesis tested by this experiment can be summarized as follows:

if the ether were a real, immovable system of reference, the measurement of the speed of light in a moving (rotating) system, such as the earth, would give different magnitudes for c, depending on whether the light is moving with the earth’s rotation or in the opposite direction.

However, neither Michelson nor Morley could find any change of c with respect to the earth’s rotation. This correct result on the constancy of space-time, as manifested by the velocity c of the photon level, has led to the absolutely wrong conclusion that the earth is “immovable with respect to the ether”.

However, the earth itself is a rotating system – it revolves around its axis, around the sun and so on (superimposed rotation). Therefore, this gravitational system cannot be immovable in absolute terms.

As the speed of light c remains constant, the same must hold for the ether. It cannot be an immovable entity – an absolute reference system at rest, as expected in terms of Euclidean space.

Unfortunately, instead of rejecting the empty space of classical mechanics and modifying the ether concept, the consequence of the Michelson-Morley experiment was the refutation of the ether, that is, of photon space-time, as a real level and its substitution with the concept of the void (vacuum), where c-dependent “actions at a distance” are observed as long-range correlations (LRC), which are mediated through hypothetical fields, such as electromagnetic and gravitational fields.

This experimental interpretation marks one of the darkest periods of modern physics, pushing this discipline in entirely the wrong direction for more than a century, until the Universal Law was finally discovered in 1995 and all known partial physical laws were integrated by this law as its specific mathematical applications.

The interpretation of the Michelson-Morley experiment led to the development of the special theory of relativity. In fact, Einstein learned about the Michelson-Morley experiment only after he had already established the special theory of relativity. The interpretation of the theory of relativity in terms of this experiment is an a posteriori adaptation of historical facts to serve the human need for a linear time chronology.

The rejection of the ether has cemented the dogma that space-time is empty and homogeneous, and that photons, being particles with the energy E = hf but having no mass (?), propagate in it with the speed of light – which is utter nonsense, as I have proved beyond any doubt. The dogma that particles move in a vacuum is based on the assumption that N-sets exist and is thus a cardinal epistemological flaw of physics.

Departing from the nature of space-time, I exclude all scientific concepts that are N-sets. In this way I eliminate all paradoxes of science that culminate in the famous continuum hypothesis of mathematics.

The origins of the theory of relativity were laid in electromagnetism when it became obvious that space and time were two canonically conjugated constituents of space-time that behave reciprocally.

Read here: Why Space-Time = Energy Has Only Two Dimensions (Constituents) – Space and Time (Full Article)

This reciprocity is an aspect of the constancy of space-time as manifested by the parts:

since [space-time] = constant = 1, it follows that [space] = 1/[time] = 1/f.

This follows from the primary axiom. The knowledge of the actual reciprocity of space and time is vested in the historical empirical observation that the quotient of electron area (charge) and mass

e/me = SP(A)e/SP(A)m = 0 ≤ SP(A) ≤ 1

is decreasing with growing velocity v = [1d-space-time] = E.

Within the new Axiomatics, this phenomenon can be immediately explained. As mass is a space-time relationship built in an abstract way, when the energy (space-time) of a system, such as the electron, increases relativistically, its space-time relationship, that is, its mass, will also increase with respect to the constant reference unit of 1 kg.
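A minimal numerical sketch of this effect, assuming the conventional relativistic relation m = γm0 used in the Lorentz transformations below and a constant elementary charge e (the constants are the CODATA values for the electron):

# Sketch: the charge-to-mass ratio e/m of the electron decreases with velocity,
# assuming a constant charge e and the relativistic mass m = gamma * m0.
from math import sqrt

c  = 2.99792458e8         # speed of light, m/s
e  = 1.602176634e-19      # elementary charge, C
m0 = 9.1093837015e-31     # electron rest mass, kg

for beta in (0.0, 0.5, 0.9, 0.99):          # beta = v/c
    gamma = 1.0 / sqrt(1.0 - beta**2)
    m = gamma * m0                          # relativistic mass
    print(f"v = {beta:4.2f} c   e/m = {e / m:.4e} C/kg")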

This phenomenon was interpreted somewhat clumsily by Lorentz, who postulated that the spherical form of the electron flattens in the direction of its movement, so that its mass increases in terms of density. He relied on FitzGerald’s interpretation of the Michelson-Morley experiment, which suggested that the earth contracts in the direction of its revolution. This would have explained why Michelson and Morley did not find any difference in c depending on the earth’s motion.

In this experiment, the location of the observer was linked to the earth or rather he was part of the earth. For this reason the observer was not in a position to determine the relative contraction of the earth. If the observer had been placed outside the earth, that is, in photon space-time, he would have measured a relative contraction of the earth in the direction of rotation.

FitzGerald proposed a simple factor of proportionality, with which this length contraction could be calculated:

γ⁻¹ = √(1 - v²/c²) = √(c² - v²)/c = √(dLRC/LRCp) =

= √(SP(A)relative/SP(A)reference) =

= [1d-space-time]rel/[1d-space-time]ref = 0 ≤ SP(A) ≤ 1

I call this factor in the new theory of the Universal Law the “proportionality factor of Lorentz transformations”, or simply the Lorentz factor, because it is basic to Lorentz’s relativistic presentation of space and time in electromagnetism.

The above equation shows that:

The Lorentz factor γ⁻¹ is an iterative mathematical presentation of Kolmogoroff’s probability set 0 ≤ SP(A) ≤ 1, as defined according to the principle of circular argument within mathematics. The initial system of reference is photon space-time, as expressed by its LRC = c², to which the relativistic change of the space-time of the systems, dLRC, is set in relation.
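A short numerical check of this reading (a sketch only, with arbitrary sample velocities): for any velocity 0 ≤ v < c the Lorentz factor γ⁻¹ = √(1 - v²/c²) indeed falls between 0 and 1, the range of the probability set 0 ≤ SP(A) ≤ 1:

# Sketch: gamma^-1 = sqrt(1 - v^2/c^2) always lies between 0 and 1.
from math import sqrt

def lorentz_factor_inverse(beta):
    """beta = v/c with 0 <= beta < 1."""
    return sqrt(1.0 - beta**2)

for beta in (0.0, 0.1, 0.5, 0.8, 0.99, 0.9999):
    g_inv = lorentz_factor_inverse(beta)
    assert 0.0 <= g_inv <= 1.0              # the "probability" range discussed in the text
    print(f"v/c = {beta:6.4f}   gamma^-1 = {g_inv:.6f}")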

It is indeed amazing that neither Lorentz nor Einstein, nor any other physicist after them, has comprehended this simple methodological fact, namely that all mathematical equations in the theory of relativity are actually presentations of the probability set 0 ≤ SP(A) ≤ 1 of statistics, while the latter is another variation of the continuum set in mathematics. I have discussed this theoretical aspect in detail in Volume I and also in Volume II. In my next article I will refer one more time to the true essence of the theory of relativity as statistics applied to space-time.

Lorentz derived this factor from FitzGerald’s length contraction and applied it to time dilution. He was the first to speak of the “local time” and “local space” of objects that change in a relativistic manner in the direction of movement.

In terms of the ether hypothesis, FitzGerald’s length contraction and Lorentz time dilution indicate that when space and time are measured in moving objects, they will have different magnitudes compared to those measured in relation to absolute immovable ether, that is, to the space-time magnitudes measured in relation to themselves from a static point of view (building of the certain event within mathematics).

In this way, the relativity of space and time, which is objectively observed and assessed by the Lorentz factor, has given birth to the theory of relativity.

In this process, both the absolute unchangeable space of classical mechanics and the concept of ether in electromagnetism have been abolished. They have been substituted by a hermaphrodite concept of space-time in the theory of relativity which is generally accepted today. It combines the emptiness and homogeneity of Euclidean space as vacuum (void) with the reciprocal behaviour of its constituents as assessed by the Lorentz factor in the electromagnetic theory of relativity.

Furthermore, the general theory of relativity postulates that this space-time is “bent“ (curved) by gravitation. There is, however, no explanation as to how this energy interaction is mediated in the void, or by the void, because neither classical mechanics, nor Einstein’s general theory of relativity, proposes any theory of gravitation. This fact demonstrates the provisional character of Einstein’s theory of relativity.

The mechanism of gravitation was explained for the first time stringently in the new theory of the Universal Law by employing all relevant knowledge and experimental data from classical mechanics, electromagnetism, theory of relativity and quantum mechanics.

Read here: The Mechanism of Gravitation – for the First Time Explained

Before the discovery of the Universal Law, the old physics was unable to integrate gravitation with the other three fundamental forces (read here). This deficiency of the standard model is generally recognized by all theoreticians, which explains why more than 50% of all theoretical physicists nowadays work on improving the standard model in their research activities as they officially write on their websites.

This stark fact clearly shows how incomplete and provisional this science has been from its inception to the present day. That is why it is incomprehensible to me why physicists have exhibited such a pathological, fear-driven resistance to the popularisation of the new theory of the Universal Law in the two decades since Volume I on physics and mathematics was first published in the summer of 1997.

 

II.3. The Space-Time Concept of the Special and General Theory of Relativity

In 1905, Einstein realized that the Lorentz transformations were not artificial presentations of the local space and time of electromagnetic systems, but were fundamentally linked to our very understanding of space-time. While the principle of relativity as expressed by the Lorentz factor was still regarded as being of purely theoretical character, the constant speed of light c was already a well-established fact.

In the first step, Einstein refuted the principle of simultaneity inherited from classical mechanics and substituted it with the principle of relative simultaneity. This “new“ insight was a delayed discovery. Since Galilei, who first discovered and measured gravitation and thus founded modern physics, it took more than three centuries to realize this simple fact, although the relativity of space (position) and time has been a central theme of philosophy since antiquity.

The principle of relativity is a consequence of the properties of space-time. As space-time is closed, we can arbitrarily select any system as a system of reference and compare any other system to it according to the principle of circular argument. This is how the SI system and its units were introduced in physics, however without understanding this fundamental theoretical fact.

Read here: Why Space-Time = Energy Has Only Two Dimensions (Constituents) – Space and Time (Full Article)

This means that there is no “absolute space” and “time”, as Newton introduced in classical mechanics, but only specific magnitudes (relationships) of the two constituents of space-time = energy for each system and level. This is a consequence of the inhomogeneity (discreteness) as another fundamental property of space-time (see Axiomatics).

The principle of simultaneity reflects the open character of the systems of space-time as U-sets – any local interaction is part of the total energy exchange in the universe (= primary term). In the Axiomatics I have proved that all systems of All-That-Is are U-sets and contain themselves and the Whole as an element. The principle of simultaneity is thus an intuitive, albeit unprocessed, notion in physics that space-time is a unity which is the cognitive foundation in the new Theory of Science of the Universal Law. It proves that all known particular physical laws are derivations and manifestations of one law of nature.

Therefore, it is not a coincidence that when Einstein discovered this principle in physics, all avant-garde movements in Europe were discovering the principle of “simultanéité” in arts and poetry (see Volume IV). Today, we speak of globalization and regard the earth as a village. Tomorrow, if we survive, we shall expand this feeling to the universe by implementing the theory of the Universal Law. This is the anticipated evolution of human consciousness, before it becomes an active part of the universal consciousness of space-time (1).

The two postulates of the theory of relativity are well known.

  • The first one is the principle of relativity, which says that there is no preferential inertial reference frame: the laws of nature are the same in all inertial systems.
  • The second postulate concerns the principle of the constant speed of light: the speed of light c in vacuum is constant in any inertial reference frame and does not depend on the movement of the source, or alternatively, each observer measures the same value for the speed of light in vacuum.

This is the traditional presentation of Einstein’s postulates, which one can find in numerous textbooks on physics and the theory of relativity.

It is, indeed, amazing that until now nobody has noticed the intrinsic paradox between the two postulates. This is a classic example of the cognitive blindness of modern physics with respect to its basic concepts. The paradox emerges from the use of the concept “inertial reference frame“. This term is introduced in conjunction with the law of inertia.

This law can only distinguish between a uniform motion (a = 0) and a motion with acceleration (a > 0). By definition, all inertial reference frames must move uniformly or stay at rest; otherwise the first law is not valid.

Does this mean that the principle of relativity does not hold in accelerated systems? Obviously not, for exactly this contradiction ought to be eliminated by Einstein’s second postulate. It says that the speed of light remains the same, independently of the movement of the observer. This postulate does not discriminate between a uniform motion and a motion with acceleration.

From this it is evident that there is a fundamental paradox between the first and the second postulate of the special theory of relativity.

How can we avoid this paradox? This paradox is actually eliminated in the general theory of relativity, which is based on the principle of equivalence:

“a homogeneous gravitational field is completely equivalent to a uniformly accelerated reference frame.” (2)

This principle acknowledges the simple fact that there are no real inertial reference frames. For this reason, Einstein substitutes the concept of the inertial reference frame of the special theory of relativity, which is an object of thought without any physical correlate, with real reference frames – the local gravitational potentials glocal = LRCG. For instance, the gravitation of the earth is such a real reference frame. It is equivalent to an accelerated system, for example, to a rocket with the same acceleration as g but launched in the opposite direction. This is a frequent example with which the principle of equivalence is explained in conventional textbooks on physics.

There are two major cognitive aspects of this principle that should be elaborated. Firstly, there are infinite real reference frames because there are infinite celestial objects in space-time with specific gravitational fields or potentials (LRC, long-range correlations). Secondly, this principle holds only in motions with uniform acceleration and does not consider motions with changing acceleration. In the latter case, the motion is regarded as consisting of infinite small segments of uniform acceleration.

As we see, the infinity of real reference frames is basic to the principle of equivalence. It is an intuitive notion of the infinity of space-time. This is also evident from the name of this principle, which is an intuitive, albeit unconscious, perception of the principle of last equivalence – the first and only a priori axiom of the new Axiomatics of the Universal Law.

Indeed, Einstein’s idea of equivalence reflects the principle of last equivalence of our Axiomatics when applied to the parts as the principle of circular argument. Any definition of a mathematical equivalence is based on this principle. This has not been understood in theoretical mathematics, as embodied in its foundation crisis, which I first resolved in 1995 and thus saved modern science from this theoretical peril that hung like the sword of Damocles over the heads of all scientists, even though they preferred to close their eyes and neglect this peril for many decades.

We come to an important conclusion:

The principle of equivalence of the general theory of relativity is an application of the principle of circular argument. It also consists of building equivalences and making comparisons. This is the only objective of this discipline and of physics as a whole.

Evidently, when the theory of relativity is taken to its logical end (which Einstein obviously failed to do), it leads to the rejection of the law of inertia. This is inevitable in the light of the new Axiomatics. However, this law has a rational core that should be spelled out for the sake of objectivity.

From a mathematical point of view, Newton’s first law of inertia is a special case (borderline case) of the second law: F = ma; if a = 0, then the resultant force is zero F=0 and we have the condition of the first law. The law of inertia holds only in reference frames free of forces, that is, in empty space. However, there is no empty space – space-time is continuous. As space-time is equivalent to energy, there is no place in All-That-Is that is free of forces and where the law of inertia could be valid.

What is the epistemological background of this law in the light of the new Axiomatics? Very simple! The Universal Law departs from the reciprocity of space and time, where space-time (energy) is proportional to time: E ≈ f. If time approaches zero, f → 0, then space-time will also approach zero: E ≈ f → 0. In this case, space will approach infinity: [space] → ∞. This infinite space will be homogeneous, because its discreteness is a function of time f: discreteness = f → 0.

The magnitude of such an abstract space can be formally presented by means of straight lines (paths) within geometry, because the radius of this hypothetical rotation will be infinite: r → ∞. Under these boundary conditions, space-time acquires the properties attributed to empty Euclidean space, as they are embodied in the law of inertia.

From this we conclude:

The law of inertia is a mathematical abstraction (object of thought) that describes the hypothetical boundary conditions of space-time:

when E ≈  f = discreteness → 0, then

[space] → ∞ = homogeneous, empty space =

= Euclidean space (straight lines) 

The actual theory of relativity is an application of Lorentz transformations of electromagnetism, with which the space-time of all material objects is mathematically assessed, while at the same time photon space-time is regarded as an empty, homogeneous entity. This mathematical presentation of space-time and its abstract quantities, such as mass and momentum, is called “relativistic”. Hence the terms: relativistic energy, relativistic mass and relativistic momentum.

These quantities are built within mathematics according to the principle of circular argument by selecting photon space-time as the initial reference frame without comprehending the theoretical implications of this fundamental decision. This is a leitmotif of all my writings on the Universal Law.

Read also: Why Space-Time = Energy Has Only Two Dimensions (Constituents) – Space and Time (Full Article)

When FitzGerald’s length contraction and Lorentz’s time dilution are expressed within the theory of relativity, we immediately recognize that the Lorentz factor γ⁻¹ is another mathematical presentation (iteration) of Kolmogoroff’s probability set (see previous publication):

tR/t = L/LR = γ⁻¹ = √(1 - v²/c²) = 0 ≤ SP(A) ≤ 1

when v → 0, then γ⁻¹ → 1;

when v → c, then γ⁻¹ → 0.

In the above equation, tR is the rest time between two events (note: all events are action potentials), also called “local” or “own” time, which is measured in a system at rest; t is the diluted time measured in an accelerated reference system. Analogously, LR is the length of a system at rest, and L is its contracted length under acceleration.
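A worked numerical example of these two relations (a sketch with arbitrary inputs): at v = 0.8c the factor is γ⁻¹ = 0.6, so a rest time tR = 1 s appears diluted to t ≈ 1.67 s and a rest length LR = 1 m appears contracted to L = 0.6 m:

# Sketch: t_R/t = L/L_R = gamma^-1 evaluated for v = 0.8 c.
from math import sqrt

beta  = 0.8                       # v/c
g_inv = sqrt(1.0 - beta**2)       # = 0.6

t_R = 1.0                         # rest ("own") time between two events, s
L_R = 1.0                         # rest length of the system, m

t = t_R / g_inv                   # diluted time in the moving reference system
L = L_R * g_inv                   # contracted length of the moving system

print(f"gamma^-1 = {g_inv:.3f}")                       # 0.600
print(f"t = {t:.3f} s,  t_R/t = {t_R / t:.3f}")        # 1.667 s, 0.600
print(f"L = {L:.3f} m,  L/L_R = {L / L_R:.3f}")        # 0.600 m, 0.600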

The Lorentz factor  γ-1 assesses the relativistic change of space and time, that is, of the space-time of the systems in motion. Recall that all systems are in incessant motion. This is also the basic conclusion of the theory of relativity, namely, that all objects are in relative motion. From the above equation, it becomes evident that:

the Lorentz factor gives the physical probability space:

γ⁻¹ = 0 ≤ SP(A) ≤ 1

This is a fundamental conclusion of the new Axiomatics that reduces the theory of relativity to applied statistics of space-time.

The probability set of all space-time events, being action potentials, is set in the Lorentz transformations in relation to the LRC of photon space-time:

LRC = UU = c² = [2d-space-time].

When we substitute conventional time t with time f = 1/t in the above equations we obtain the Universal Equation as a rule of three (see equation (38-5) in Axiomatics):

E1/E2 = f1/f2 = [1d-space]2/[1d-space]1 =

= tR/t = L/LR = γ⁻¹ = √(1 - v²/c²) = K1,2 = SP(A)
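As a simple numerical cross-check of the first proportionality E1/E2 = f1/f2 (a sketch only; it uses Planck’s equation E = hf and two arbitrary photon frequencies):

# Sketch: for two photons with E = h*f the energy ratio equals the frequency ratio.
h  = 6.62607015e-34      # Planck constant, J s
f1 = 5.0e14              # Hz (arbitrary)
f2 = 2.5e14              # Hz (arbitrary)

E1, E2 = h * f1, h * f2
print(E1 / E2)           # 2.0
print(f1 / f2)           # 2.0 - the rule of three E1/E2 = f1/f2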

This is the whole theoretical background of Einstein’s theory of relativity – be it special or general. It is a partial and inconsistent intuitive perception of the Universal Law within mathematics. After being revised, it is integrated into the new Axiomatics. In this way we eliminate this discipline as a distinct area of physical knowledge.

For this purpose I shall explain in the next publication the two basic terms of the theory of relativity, rest mass and relativistic mass, in terms of the new Axiomatics, as their wrong conventional interpretation is the main source of the cognitive malaise which afflicts physics today.

Notes:

1. The comprehension and active implementation of the theory of the Universal Law is not only a highly intellectual act – it is decisively determined by the mediality of the individual. The latter depends exclusively on the age of the soul of each individual. At present, human mediality is on the verge of an evolutionary jump, which will profoundly change human consciousness. However, only old souls, at the end of their incarnation cycle, will profit from this evolutionary jump, which represents a profound energetic transformation of the human individual. This process, known as the light body process, LBP, which is now running at high speed, has no direct impact on the majority of young souls that populate the earth at present. It will only change their “weltanschauung”. I have dedicated a special book on this subject of human Gnosis “The Evolutionary Leap of Mankind“.

2. P.A. Tipler, Physics (textbook), p. 1132. (This reference is from an earlier edition of the textbook; the page number may have changed in the latest edition.)

 

II.4. The End of Einstein’s Theory of Relativity – It Is Applied Statistics For the Space-Time of the Physical World

Rest Mass Is a Synonym for the Certain Event.

Relativistic Mass Is a Synonym for Kolmogoroff’s Probability Set

By proving that mass is an energy relationship, I have shown that Einstein’s equation postulating the equivalence between energy and mass is a tautological statement. This equivalence plays a central role in the theory of relativity and in physics today.

While in classical mechanics mass is defined in a vicious circle as the property of gravitational objects to resist acceleration, in the theory of relativity mass is regarded as being equivalent to matter, while the term energy is restricted to photon space-time. This is the epistemological background of Einstein’s equation:

E = mc², or m = E/c² = Ex/LRCp.

According to the principle of circular argument, the energy of any object of matter Ex is compared to the energy of the reference system, in this case, to the level of photon space-time LRCp, and is given as an energy relationship m (as mass).
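A minimal numerical sketch of this relationship applied to a single photon of frequency f (the frequency below is an arbitrary choice): the quantity m = E/c² = hf/c² is what the new theory reads as the photon’s mass, whereas conventional physics treats it only as a mass equivalent:

# Sketch: m = E/c^2 for a photon of frequency f, with E = h*f.
h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
f = 5.5e14               # Hz, a visible-light photon (arbitrary)

E = h * f                # photon energy, J
m = E / c**2             # energy relationship interpreted as mass, kg
print(f"E = {E:.3e} J,   m = E/c^2 = {m:.3e} kg")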

This relationship can be regarded statically or with respect to the object’s own motion. In the first case, this quantity is defined as the rest mass m0; in the second case, as the relativistic mass mr.

Within the theory of relativity, the two quantities are expressed by Lorentz transformations:

E = Ekin + m0c² = m0c²/√(1 - v²/c²) = γm0c² = mrc²

This is the equation for the total relativistic energy E, which is given as the sum of the kinetic energy Ekin and the rest energy E0 = m0c². We use this equation because it includes the relationship between the relativistic mass and the rest mass: mr = γm0.

The above equation is the relativistic expression of Einstein’s equation E = mc2. It reveals that the quotient of rest mass m0 and relativistic mass mr is another pleonastic presentation of the physical probability set within mathematics (see also previous publication):

m0/mr = γ⁻¹ = 0 ≤ SP(A) ≤ 1
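A numerical sketch of these relations for an electron moving at v = 0.6c (so γ = 1.25), using CODATA constants; it evaluates the total relativistic energy, the two masses and their quotient m0/mr = γ⁻¹:

# Sketch: E = gamma*m0*c^2 = E_kin + m0*c^2 and m0/m_r = gamma^-1 for v = 0.6 c.
from math import sqrt

c    = 2.99792458e8         # m/s
m0   = 9.1093837015e-31     # electron rest mass, kg
beta = 0.6                  # v/c

gamma = 1.0 / sqrt(1.0 - beta**2)    # = 1.25
m_r   = gamma * m0                   # relativistic mass
E0    = m0 * c**2                    # rest energy
E     = m_r * c**2                   # total relativistic energy
E_kin = E - E0                       # kinetic energy

print(f"gamma = {gamma:.4f},  m0/m_r = {m0 / m_r:.4f}")          # 1.2500, 0.8000
print(f"E0 = {E0:.4e} J,  E = {E:.4e} J,  E_kin = {E_kin:.4e} J")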

We encounter the principle of circular argument again – the theory of relativity can only define the quantity “relativistic mass of an object” in relation to “the mass of the same object at rest”. Both quantities are abstract subsets of space-time that are built within mathematics. So is their quotient, the Lorentz factor γ⁻¹ – it represents the continuum, respectively the probability set.

When we compare the rest mass with itself, we obtain the certain event:

m0/m0 = SP(A) = 1

Rest mass and relativistic mass are thus abstract quantities of space-time (space-time relationships) that are built within mathematical formalism.

Rest mass is the abstract intrinsic reference system of the observed relativistic mass (principle of circular argument). It symbolizes the certain event m0 = 1.

Relativistic mass gives the real space-time of any system in motion. As all systems are in motion, we can only observe relativistic masses. The relativistic mass is defined in relation to rest mass (principle of circular argument).

As mass is a space-time relationship, any relativistic mass of a system is greater than its rest mass: mr > m0. Their quotient represents the physical probability set:

m0/mr = γ⁻¹ = 0 ≤ SP(A) ≤ 1

This equation is derived by the principle of circular argument and includes the entire cognitive background contained within the two basic terms of the theory of relativity, rest mass and relativistic mass, which has not been realized either by Einstein or any other physicist after him.

The theory of relativity could, indeed, be very simple once the right axiomatic approach is employed – the new Axiomatics of the Universal Law.

“Everything should be made as simple as possible, but not simpler.” – Albert Einstein

 

III: Why Modern Cosmology Is a Fake Science

 

III.1. Modern Cosmology Revised in the Light of the Universal Law – a Critical Survey

Today I was made aware of a heated dispute that is raging in the high ranks of modern cosmologists regarding the wrong assumptions on which this new science “cosmology” is based. In volume I, and much more extensively so in volume II, I have discussed the basic theoretical tenets of modern cosmology and explained why it is an utterly fraudulent science – precisely a “fake science” – even more so than its older sister physics.

In my series of theoretical articles on physics published in March and April this year,

I have already shown why the fundamental concept of dark matter in modern cosmology is one of the greatest blunders in science. Physicists have failed to understand, from a methodological and epistemological point of view, their own definition of mass, which they use in all their other definitions and theoretical disquisitions. When it is properly interpreted, it becomes obvious that the physical quantity “mass” is an energy relationship and not an intrinsic property of matter. As all systems of All-That-Is have energy, which is per definition the primary term of human consciousness for All-That-Is, all systems also have a mass. Period!

This means that photons also have a mass and are not “massless” particles, as conventional physics claims nowadays. I have proved not only that photons have a mass, but also that the mass of all elementary particles can be very easily calculated from the mass of the basic photon, which is a fundamental natural constant I first discovered in 1995 (see Table 1). I have presented these derivations and the theoretical background in my full article proving that energy = space-time has only two dimensions – space and time – which in itself is the biggest revolution in science:

Why Space-Time = Energy Has Only Two Dimensions (Constituents) – Space and Time (Full Article)

Present-day cosmologists have adopted this greatest blunder of all in physics, namely that photons do not have a mass, only because physicists have failed to grasp their own definition of mass from a theoretical point of view, and they have perpetuated this blunder into veritable insanity in the field of modern cosmology. Because they reject photon mass, they are unable to account for 95% of the theoretically calculated mass in the universe with respect to the mathematical models they have developed for All-That-Is as macro-cosmology. This fundamental blunder has necessitated the introduction of a plethora of further flaws and contradictory concepts that have turned modern cosmology into a real joke and a total negation of rational, logical human thinking. The confusion is so great that only those who are not trapped in it can even approximately comprehend it. For those who are embroiled in this insane world of pseudo-science, there is no hope.

I refer here to the heated debate that has recently erupted among the insane inmates of the small asylum called “Modern Cosmology”, as this overview article explains. I will publish the full article below for the sake of completeness:

Stephen Hawking And 32 Top Physicists Just Signed a Heated Letter on The Universe’s Origin. Sh*t just got real.

Fiona MacDonald, 12 MAY 2017

 “For centuries, people have puzzled over how our Universe began. But the heat just got turned way up on a debate that’s quietly been raging between cosmologists, with 33 of the world’s most famous physicists publishing a letter angrily defending one of the leading hypotheses we have for the origin of the Universe.

The letter is in response to a Scientific American feature published back in February, in which three physicists heavily criticised inflation theory – the idea that the Universe expanded just like a balloon shortly after the Big Bang. The article went as far as claiming that the model “cannot be evaluated using the scientific method” – the academic equivalent of saying it isn’t even real science.

In response, 33 of the world’s top physicists, including Stephen Hawking, Lisa Randall, and Leonard Susskind, have fired back by publishing their own open letter in Scientific American. The Cliff’s note version is this: they’re really angry.

Inflation theory was first proposed by cosmologist Alan Guth, now at MIT, back in 1980. It’s based on the idea that a fraction of a second after the Big Bang, the Universe expanded rapidly, spinning entire galaxies out of quantum fluctuations.”

Here we have the usual suspects and forgers of modern science exposed by name. I discussed these models as early as 1995, shortly after some of these weird hypotheses were first published, such as the so-called “inflation theory” which, by the way, has very much in common with the inflationary debt fiat currencies of the fraudulent Orion monetary system. Now, more than two decades later, the chickens have come home to roost.

Charlotte, who made me aware of this article, summed it up excellently: “Scientists fighting for relevance now that their foundation has been exposed as false. Your moment of acknowledgement is approaching, George!” Let us hope she is right; in the meantime, patience is the mother/father of all ascended masters.

Here I would only comment on one chief forger named in this article – Stephen Hawking. This person actually does not exist – he is an empty holographic image of the dark ones, who use his false reputation to promote all kinds of dark theories that fit their plans to install the NWO and confuse the minds of the people with rogue scientific concepts of despair. I am not sure whether he is really alive, or a clone, or something else, as he is unable to communicate directly but allegedly does so through a machine that reads his thoughts. Go figure!

I saw him personally in 1998 at a scientific conference in Berlin/Potsdam and even then he did not seem real to me. Since then he has not attended any conference, to my knowledge, and is kept in the shadows, from where his puppet masters regularly publish obscure scientific comments in his name that only serve their purpose. So much for this rogue personality, who carries the nimbus of the greatest fraudster-scientist of modern times. However, he has many predecessors as rogue representatives of fake science in this darkest pisspot on earth – GB – as I have proved beyond any doubt in the General Theory of Science and Gnosis presented on this website in 15 books and several thousand articles.

Below I publish my introduction to modern cosmology, in which I discuss the major false assumptions of this fake science in the light of the new theory of the Universal Law. I wrote this article first in German in 1996 and then translated and expanded it in English in 1998.

Modern Cosmology in the Light of the Universal Law (revised essay from 1996-1998)

While physics has evolved into a study of particular levels and systems of space-time that are closely associated with human demands, one would expect cosmology to have developed into a study of the primary term, when the principle of last equivalence is considered. This is, however, not the case when one analyses the few acceptable textbooks on this discipline.

The outstanding feature of modern cosmology is the lack of a clear-cut definition of its object of study: the universe – space-time, energy, or cosmos – is described in a vicious circle in the same mechanistic and deterministic manner as are its systems and levels in physics. Similarly, cosmology has failed to develop an epistemological approach to space-time as an entity consisting of only two dimensions/constituents – space and time. Nevertheless, there is a subconscious pattern behind all cosmological concepts that constitutes an intuitive perception of the primary term. This is a consequence of the fact that human consciousness always abides by the Universal Law.

The objective of this short survey on modern cosmology is to reveal this aspect. As we cannot consider all heterogeneous schools and ideas of this discipline, we shall restrict ourselves to the standard model of cosmology (which is different from the standard model in physics) that represents the mainstream of cosmological thinking today. Based on the Universal Law, we shall reject this model and debunk the present system of cosmology. The remaining mathematical facts will be integrated into the new Axiomatics.

Modern cosmology is a new discipline. It began in the twenties of the last century, when the general theory of relativity was being developed as a geometric study of empty space-time and applied to the universe as an ordered whole by Einstein, Lemaître, de Sitter, Friedmann and others. Its core is the standard model, a collection of heterogeneous ideas which have been put together in a similar manner to the standard model of physics. Hence the same name, as first suggested by Weinberg in 1972.

The standard model of cosmology is a hot expanding world model based on the following primary ideas:

1. The universe is homogeneous and isotropic on average, at any place, at any time. This is called the “cosmological principle“. This philosophical concept is basic to any cosmological approach. It is an application of the principle of last equivalence – the primary term is perceived in the same way by anybody, at any time, at any place. This allows the establishment of an objective Axiomatics that leads to the unification of science – the latter being a metaphysical level of space-time. This is essentially an anthropocentric definition because for obvious reasons we have no idea of how other conscious beings (aliens) perceive the physical world.

The cosmological principle, being a rudimentary idea of the primary term, was first introduced by Milne (1935) and then further developed by Einstein as a variation of his principle of equivalence (see Volume II, chapter 8.3). Einstein departed from Mach’s principle, which postulates that the inertial reference frames adopted from classical mechanics should be regarded in relation to the distribution and motion of cosmic mass, that is, in relation to the actual space-time relationships (1). Einstein generalized Mach’s principle (as he did with the relativity of space and time in electromagnetism developed by Lorentz and other physicists before him) and applied it to the whole universe. Einstein never had a truly original idea of his own.

This was an arbitrary decision (degree of mathematical freedom), since the local space-time relationships which we observe are heterogeneous and discrete. Indeed, the universe consists of clusters of galaxies separated by photon space-time that is empty of matter, as is confirmed by recent astronomic observations, for instance, by the Hubble telescope. Therefore, the cosmological principle, which postulates a homogeneous and isotropic universe, does not assess the real properties of space-time, but is an abstract equivalence built within mathematical formalism. This fact reveals the absurdity of Einstein’s endeavour to exclude human consciousness from any scientific perception of the physical world (2).

2. The universe expands according to Hubble’s law with the escape velocity v of the galaxies, which is proportional to the distance dl of the observer from the galaxies:

dv = dl/dt = Ho·l = [1d-space-time]

Hubble’s law is an application of the Universal Law for one-dimensional space-time. Ho is called the Hubble constant. It is reciprocal conventional time and thus a constant quantity of time: Ho = f. The epistemological background of this constant is not known in cosmology. We shall prove that this specific magnitude gives the constant time of the visible universe: Ho = fvis.

In astrophysics, the Hubble constant is roughly estimated from the intensity of selected galaxies. Its value varies from author to author between 50 km/s and 80 km/s per Mpc (megaparsec). The latest estimations tend towards the smaller value. The reciprocal of the Hubble constant, 1/Ho, is called the “Hubble time” and is thus an actual quantity of conventional time. It is regarded as the upper limit of the age of the universe, AU ≤ 1/Ho, when the gravitational forces between the galaxies are ignored. As the traditional cosmological units of space and time are highly confusing, we shall convert them into SI units. This will significantly simplify our further discussion.

The cosmological unit of distance [1d-space] is:

1 Megaparsec (1 Mpc) = 3.086×10²² m.

We obtain for the Hubble time (= age of the universe) the following conventionally estimated value:

AU = 1/Ho = 3.086×10²² m / 5×10⁴ m s⁻¹ = 6.17×10¹⁷ s

This corresponds to an estimated age of the universe of about 20 billion years. According to the standard model, the present universe has a “finite” age that is determined by the big bang; this initial event is defined as a “space-time singularity”. This assumption is in apparent contradiction with the primary axiom of our Axiomatics, which says that the universe, that is, its space and time, is infinite.

At present, the actual age of the “finite universe“ is estimated to be about 10 – 15 billion years, when the gravitational forces between the galaxies are theoretically considered. However, as the mass of these galaxies cannot be determined – more than 90% of the estimated mass of the universe is defined as “dark matter“, which simply means that scientists do not know anything about it (see the calculation of neutrinos’ mass here) – these estimations are of highly speculative character.

It is important to observe that all basic space and time magnitudes in cosmology, such as the Hubble constant, can only be roughly estimated. This fact shows that present cosmology is anything but an exact empirical science. As these quantities are basic to the standard model, fundamental paradoxes have emerged, depending on the values employed. I refer to the famous “mother-child-paradox” in cosmology that describes the finding that some galaxies as children are older than their mother – the universe – if the big bang hypothesis of finite age of the universe is accepted. This is already a strong indication that the standard model is not validated at all.

From AU one can easily obtain the radius of the finite universe RU as postulated in the standard model. According to Hubble’s law, the actual magnitude of the second constituent of the universe is defined as the maximal distance that can be observed, that is, the maximal distance which the light that is emitted from the remotest galaxies covers before it reaches the observer:

RU = c·AU = 2.9979×10⁸ m s⁻¹ × 6.17×10¹⁷ s = 1.85×10²⁶ m

According to Hubble’s law, both values are natural constants. While this fact confirms the constancy of space-time (universe) as manifested by its systems – in this case, by the visible universe – it is in apparent contradiction with the assumption that the universe “expands“.

Modern cosmology does not give any explanation of this obvious paradox between Hubble’s law and the hypothesis of the expanding universe as put forward in the standard model.

A major objective of this section on cosmology in volume II is to prove that:

The two magnitudes, RU and Ho = 1/AU, are universal cosmological constants that assess the constant space-time of the visible universe. When modern cosmology speaks of the “universe“, it means the space-time of the visible universe, which is a system (U-subset) of space-time. The visible universe is not identical to the primary term of space-time (energy = universe = All-That-Is).

The primary term cannot be assessed in a quantitative way, but only in philosophical and meta-mathematical categories. Thus the visible universe is a specific, concrete cosmological system of space-time. It determines the limits of human knowledge at present. Therefore,

the visible universe is the only possible object of study of cosmology.

Like any other system, it has a constant space-time – it is a U-subset that manifests the properties of the whole. For this reason, its space (RU) and time (Ho = 1/AU) magnitudes are natural constants. As space-time is an open entity, we shall prove that these constants can be precisely calculated from known space-time constants which can be exactly measured in local experiments. In this way we shall eliminate the necessity of performing expensive research of doubtful quality in astrophysics.

While proving that modern cosmology can only assess the constant visible universe, we shall refute the erroneous hypothesis of an expanding universe that evolved from an infinitely small space of incredible mass density, called the "big bang". This state is believed to have existed about 15-20 billion years ago.

According to this view, the universe has evolved from this “space singularity“ to its present state by expansion which still persists.

3. The standard model describes this past and present expansion of the universe. This model is based on Hubble’s law and the existence of the cosmic background radiation (CBR). The latter is regarded as a remnant of the initial, extremely hot radiation of the big bang that has been adiabatically cooled down to the present temperature of 2.73 K. The theoretical basis of this hypothetical, hot expansion model is the theory of relativity, which is geometry applied to the visible universe and deals essentially with the level of gravitation (see Einstein’s cosmological constant in Volume II).

Therefore, the method of definition and measurement in cosmology is mainly geometry (topology) of space. In addition, the statistical method is used. The standard model is highly limited in its philosophical introspection; for instance, it forbids questions like:

“Where does the universe expand?

Where does the space which opens between the expanding galaxies come from?“,

and so on.

In other words, this model evades precisely those questions that should trouble the mind of any sincere cosmologist who strives for true knowledge of the universe.

The standard model cannot explain many facts that have been accumulated in the last few years. For instance, new measurements by the COBE satellite have confirmed that the CBR is not isotropic and homogeneous as postulated by the standard model, but exhibits a local anisotropy. These conflicting facts have necessitated further modifications of the standard model.

The so-called "inflation hypothesis" has been developed by Guth and Linde (see article below) to overcome the problem of CBR-anisotropy, which is of major theoretical importance. However, this hypothesis is of such a speculative character that it cannot be verified by any means. It rather exposes cosmology as science fiction. (I wrote this conclusion in 1996, 21 years before this dispute erupted in cosmology this year.)

For this reason the inflation hypothesis is not considered part of the standard model, but a complementary conceptual contribution of provisional character. The standard model excludes alternative cosmological explanations, such as the steady-state models of Bondi (1960) or Dicke (1970). These models reflect more adequately the constant character of space-time. As these models do not represent the mainstream of cosmological dogma, they will not be discussed in this short survey on cosmology.

Notes:

1. "Einstein adopted, as Mach's principle, the idea that inertial frames of reference are determined by the distribution and motion of the matter in the universe." P.J.E. Peebles, Principles of Physical Cosmology, Princeton University Press, New Jersey, 1993, p. 11.

2. Einstein believed that natural laws existed independently of human consciousness. The logical inversion of this belief is that consciousness does not follow natural laws – hence his plea for the elimination of subjective human consciousness from science. This epistemological antinomy is inherent to the modern scientific outlook. The role of consciousness in defining all scientific concepts in an abstract manner – concepts which are only secondarily confirmed in the real world – is eliminated from current scientific considerations. Instead, empiricism is celebrated as the only source of knowledge.

However, it still operates in an unpredictable manner at the subconscious level as human intuition. In the new Axiomatics, we eliminate this artificial antinomy by proving that consciousness is a system (level) of space-time that obeys the Universal Law, just as any other system or level. All primary concepts which have been historically developed in science reflect more or less the Universal Law. Unfortunately, this intuitively correct perception is frequently lost at the alleged rational level of current human argumentation – be it scientific or trivial. This is particularly the case with all non-mathematical ideas of science. The hidden psychological force behind this rejection of the Universal Law at the rational level is the “angst (anguish) structure“ of human beings, which is of rigid energetic character and determines their illogical thinking and behaviour to a great extent. I have elaborated this energetic aspect of human behaviour in a special book on esoteric Gnosis based on the Universal Law “The Evolutionary Leap of Mankind“.

Attachment:

Stephen Hawking And 32 Top Physicists Just Signed a Heated Letter on The Universe’s Origin

Fiona Macdonald, 12 May, 2017, Sciencealert

Read also: Stephen Hawking among 33 scientists on offensive against critics of popular universe origin theory

 

III.2. Hubble’s Law Is an Application of the Universal Law for the Visible Universe

The equation of Hubble's law as presented in the previous publication on cosmology shows that this cosmological law is an application of the Universal Law and assesses one-dimensional space-time according to the definition of the new Axiomatics:

dv = dl/dt = Ho·l = [1d-space-time]

As the Hubble constant Ho is a natural constant, the law assesses the constant space-time of the visible universe as the maximal particular system of All-That-Is that is accessible to human senses and material instruments:

dv = dl/dt = Ho·lmax = [1d-space-time]

The proof is fairly simple. According to Hubble's law, the maximal escape velocity dv which a galaxy reaches before it emits a light signal to the observer is the speed of light: dv → c. As Hubble's law claims universal validity, it also holds for escape velocities that are greater than c. In this case, the light emitted by galaxies with dv > c will not reach the observer because the speed of light is smaller than their opposite escape velocity. The resultant speed (space-time) of the emitted photons is negative with respect to the observer, that is, such photons will never reach the observer, but they still exist and should be considered in cosmology.

As our information on any material celestial object in the universe is transmitted through photon space-time, galaxies with a higher escape velocity than the speed of light are no longer visible to the observer. This means that there is an event horizon of the visible universe, beyond which Hubble’s law still holds true, but can no longer be observed. The validity of Hubble’s law beyond the event horizon also follows from the fact that it is an application of the Universal Law of space-time, while the visible universe is a particular system thereof.

The event horizon determines the boundaries of the visible universe with respect to human cognition. The boundaries of the visible universe are determined by the magnitude of c because photon space-time is the ultimate level of space-time which we can perceive at present. As all levels of space-time are U-subsets and contain themselves as an element, we cannot exclude the possibility that there are further levels beyond photon space-time with a higher velocity than c. If we gain access to them, we shall enlarge our event horizon of the visible universe.

As we see, the event horizon assesses the space of the visible universe with respect to our senses and present level of technological development. This cosmological system can be expressed as a [1d-space]-quantity, for instance, as the radius RU (open straight line) or the circumference SU (closed line), or as KS = SP(A)[2d-space] = spherical area = charge, in geometry (method of definition = method of measurement).

As in all other systems, these quantities are constant: they assess the constant space of the visible universe with the constant time of Ho. We conclude:

Hubble’s law assesses the constant space-time of the visible universe:

dv = dl/dt = Ho·lmax = Ho·RU → c = [1d-space-time]vis = constant

The maximal distance from the observer lmax is defined as the radius of the visible universe: lmax = RU. In cosmology, one usually speaks of the "universe". Whenever we use this term from now on, we shall mean the "visible universe", which is a system of space-time and is thus not identical with the primary term.

From the radius of the universe, we can easily obtain the event horizon of this basic cosmological system as KS (the surface area of the visible universe as a sphere) within geometry:

Event horizon: KS = SP(A)[2d-space] = 4πRU² = constant

This quantity is constant for any observer in space-time. This practical equivalence is an aspect of the cosmological principle. In this case, the cosmological principle is a U-subset of the principle of last equivalence for the system “visible universe“ – it is an application of the principle of circular argument and is thus not identical with the primary axiom. This clarification is essential for the subsequent refutation of the standard model of cosmology as hot expanding hypothesis.
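
The following Python sketch (again purely illustrative) recomputes RU = c·AU from the Hubble time estimated above, verifies that Ho·RU reproduces the speed of light, and evaluates the event horizon KS = 4πRU² as a spherical surface:

import math

C = 2.9979e8            # speed of light, m/s
A_U = 6.17e17           # Hubble time (age of the visible universe), s, from above
H_0 = 1.0 / A_U         # Hubble constant, 1/s

R_U = C * A_U                     # radius of the visible universe, ~1.85e26 m
K_S = 4 * math.pi * R_U ** 2      # spherical surface (event horizon), m^2

print(f"R_U = {R_U:.3e} m")
print(f"Ho * R_U = {H_0 * R_U:.3e} m/s (recovers c)")
print(f"K_S = {K_S:.3e} m^2")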

 

III.3. The Cosmological Outlook of Traditional Physics in the Light of the Universal Law

The hot expanding hypothesis of the standard model in cosmology assumes that the universe, as observed today, has evolved from a state of homogeneous energy with a negligible space and incredible density which exploded in a small fraction of a second. This initial state of the universe is described as the “big bang“. Since then, the visible universe – recall that cosmologists can only perceive the visible universe – is believed to have been expanding incessantly. For further information on the “big bang” hypothesis and how this bogus idea was introduced historically in science read also this article:

The “Big Bang” Is Yet to Come in the Empty Brain Cavities of the Cosmologists – Two PAT Opinions

In the context of this cosmological outlook, Hubble's law is interpreted as a "law of expansion". As this law is an application of the universal equation, we must reject this cosmological interpretation on axiomatic grounds. I have shown that Hubble's law assesses the constant space-time of the visible universe (see my previous article). The two natural constants that are derived from this law, the radius of the visible universe RU and the Hubble time assessing the age of the expanding universe, 1/Ho = AU = 1/fvis, give the constant space and time of the visible universe and confirm this conclusion. In this way I eliminate the first basic pillar of the standard model – the interpretation of Hubble's law as a law of universal expansion.

We shall now present additional proofs for this irrefutable conclusion. The idea of the expanding universe is a consequence of the faulty idea of homogeneous space-time in the theory of relativity. I have shown in the new physical and mathematical theory of the Universal Law (volume I and volume II) that Einstein had not completely corrected the empty Euclidean space of classical mechanics, but had only introduced the reciprocity of space and time for the systems of matter.

Einstein regarded the gravitational objects as embedded in empty and "massless" photon space-time defined as vacuum, which is an absolutely wrong idea (I have proved that photons have mass and thus eliminated another epic blunder of present-day cosmology – the existence of "dark matter" – which alone makes it fake science). With respect to the reciprocity of space and time, he assumed in the general theory of relativity that vacuum could be curved or bent by local gravitation. The current interpretation is that the path of light is attracted by local gravitational potentials and for this reason cannot be a straight line in space.

When this space-time concept is applied to cosmology, it inevitably leads to the neglect of the finite lifetimes of stars, as they were described by Chandrasekhar and only later verified in modern astrophysics. The finite lifetime of any gravitational system is a consequence of the energy exchange between matter and photon space-time.

The new Axiomatics clearly states that all systems, being superimposed rotations, have a finite lifetime which is only determined by the conditions of constructive and destructive interference. During this vertical energy exchange, the space-time of the material levels, such as atomic level, electron level, thermodynamic level etc., is transformed into the space-time of the photon level and vice versa.

Photons have a much greater space than that of the particles of the material levels, as can be demonstrated by the [1d-space]-quantities of their elementary action potentials: the Compton wavelengths of the electron, λc,e = 2.4×10⁻¹² m, the proton, λc,pr = 1.32×10⁻¹⁵ m, and the neutron, λc,n = 1.32×10⁻¹⁵ m, are much smaller than the wavelength of the elementary action potential h of the photon level, λA = 3×10⁸ m – or, more precisely, in the order of their intrinsic time, the specific Compton frequencies (see Table 1).

The [1d-space]-quantity of the elementary action potential is a specific constant of the corresponding level. It assesses the specific space of the level. During vertical energy exchange between two levels, the expansion of space-time changes discretely in specific constant quantitative leaps. These leaps can be assessed by building space and time relationships between the levels (the universal equation as a rule of three). Such constants are dimensionless numbers. In the new Axiomatics, I call them “absolute constants of vertical energy exchange“ (see Volume II, chapter 9.9).
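
As a purely illustrative sketch of such dimensionless ratios (one possible reading of the text; the exact definitions are given in Volume II, chapter 9.9), the following Python lines compute the Compton wavelengths quoted above from h/(mc) and form their ratios to the wavelength λA of the basic photon:

H = 6.626e-34            # Planck's constant, J*s
C = 2.998e8              # speed of light, m/s
LAMBDA_A = 3.0e8         # wavelength of the basic photon h (f = 1 Hz), m

masses = {               # rest masses, kg
    "electron": 9.109e-31,
    "proton": 1.673e-27,
    "neutron": 1.675e-27,
}

for name, m in masses.items():
    lam_c = H / (m * C)          # Compton wavelength of the particle
    ratio = LAMBDA_A / lam_c     # dimensionless ratio to the photon level
    print(f"{name}: lambda_c = {lam_c:.3e} m, lambda_A / lambda_c = {ratio:.3e}")
# electron ~2.43e-12 m, proton/neutron ~1.32e-15 m, matching the values above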

When we observe vertical energy exchange only in one direction, for instance, from matter to photon space-time, this process is perceived as an explosive expansion of space-time. This is precisely the current cosmological view.

The thermonuclear explosion is a typical, albeit more trivial, example of an energy exchange from the nuclear level towards the photon level, also defined as radiation. This process is associated with an extreme space expansion described as a "nuclear wave". The reason for this is the extremely small space of the hadrons compared to the expansion of the emitted photons during a nuclear explosion, as demonstrated by the corresponding time magnitudes of these systems of space-time – the Compton frequencies – or alternatively by their intrinsic space constants, the Compton wavelengths (see above).

When this vertical energy exchange is observed in the direction from photon space-time to matter, it manifests itself as a contraction of space. Black holes are a typical example of extreme space contraction and for that reason they are described as "space singularities". Initially, black holes were believed only to "devour" space and matter. However, this would be a violation of the law of energy conservation (1st law of thermodynamics).

Later on, it has been proven (within mathematics, because black holes cannot be directly observed) that black holes emit gamma radiation at their event horizon and thus obey the axiom of conservation of action potentials (see Axiomatics), just like all other systems of space-time. This has eliminated the spectacular character of these celestial bodies. For this reason the Russian term for black holes “frozen stars” is more appropriate.

The mean frequency of the gamma radiation of black holes fH can be presented as a function of the intrinsic time fc of the elementary particles of matter:

fH = ( fc,e + fc,pr + fc,n ) / 3
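
Taking the equation as reconstructed above at face value, a quick numerical evaluation (illustrative only, with the Compton frequencies obtained as fc = c/λc from the wavelengths quoted earlier) yields a mean frequency deep in the gamma range:

C = 2.998e8                       # speed of light, m/s
compton_wavelengths = {           # Compton wavelengths, m (see Table 1 values above)
    "electron": 2.426e-12,
    "proton": 1.321e-15,
    "neutron": 1.320e-15,
}

f_c = {name: C / lam for name, lam in compton_wavelengths.items()}   # intrinsic frequencies
f_H = sum(f_c.values()) / 3       # mean frequency per the reconstructed equation

print({name: f"{f:.3e} Hz" for f, name in zip(f_c.values(), f_c.keys())})
print(f"f_H = {f_H:.3e} Hz (gamma range)")
# f_H comes out at roughly 1.5e23 Hz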

The high temperature of black holes is another quantity of material time – of the time of the thermodynamic level of matter. In Volume II, chapter 5.5, I have derived the new fundamental CBR-constant  KCBR and have shown that the frequency of the maximal emitted radiation depends only on the temperature of the material body:

fmax = KCBR × T (see Volume II, equation (82)).

In the next publication I shall use this constant to reject the second pillar of the standard model – the traditional interpretation of the 3K-cosmic background radiation (CBR) (actually 2.73 K radiation as discussed in my previous article).

The 3K-CBR is believed to be a remnant of the hot radiation of the big bang, which has resulted from the subsequent adiabatic expansion of the universe. This view is presented in the standard model of cosmology and is closely associated with the erroneous interpretation of redshifts by Hubble’s law which will be discussed in a further publication.

From this elaboration, we conclude:

When the vertical energy exchange is observed only one way, that is, from matter to photon space-time, it gives the impression of space expansion. When the energy exchange is considered unilaterally from photon space-time to matter, it gives the impression of space contraction. When both directions are taken into consideration, the total change of space-time, measured as ΔVU (VU stands for the volume of the universe), is zero:

ΔVU  = 0, or VU = constant.

Space-time remains constant.

This is an axiomatic statement of the new theory. It could have been easily deduced from the conventional law of conservation of energy, and humanity would have been spared this intellectual insanity that fraudulent, stupid or unethical scientists have offered to it as fake cosmology.

Read also: Modern Cosmology Revised in the Light of the Universal Law – a Critical Survey

In present-day cosmology, photon space-time is regarded as a homogeneous empty void. For that reason this discipline considers the vertical energy exchange between matter and photon space-time only one way: from matter, which can be observed, to empty space, which allegedly has no structure because it cannot be directly perceived by human senses – although it is obvious in physics today that all elementary particles are spontaneously created from the "energy-rich" vacuum (void), which is a classical oxymoron and the greatest idio(cy) of all. This one-sided anthropocentric view – human beings are part of matter – which is a product of their limited senses and linearly thinking carbon-based brain, automatically evokes the misleading impression that the universe expands into the void.

As the finite lifetimes of stars are not considered in this outlook, modern cosmology has no adequate idea of the discrete, ubiquitous energy exchange between matter and photon space-time, unlike in the new Axiomatics. In Volume II, chapter 3.7, I have proved that when the axiom of reciprocal LRC is applied to the visible universe, this system of space-time can be described as a function of the LRC of the photon level and the gravitational level.

The space of the visible universe, given as SU, which is the circumference [1d-space] of the event horizon KS as the spherical surface [2d-space] of the visible universe (see equation (241) in Volume II), is proportional to the LRC (universal photon gradient) of the photon level, LRCp = UU = c², which stands for space expansion, and is inversely proportional to the LRC of gravitation as expressed by the gravitational constant G (which is field or acceleration per definition), which stands for the contraction of space, as gravitation is a force of attraction (see equation (37a), Volume II):

SU = c²/G

This beautiful, simple equation is an application of the Universal Equation as a rule of three. It embodies the entire space-time behaviour of the visible universe according to the axiom of reducibility and exposes current cosmology as absolutely "fake science". It proves that the circumference SU, which describes the event horizon of the visible universe, is a constant [1d-space]-quantity because it is a quotient of two natural constants, c and G, assessing the two levels of vertical energy exchange – photon space-time and gravitation for matter.
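
As an illustrative numerical evaluation (using standard reference values for c and G, which are an assumption here and are not given in the text), the quotient c²/G can be computed directly and set beside the circumference 2πRU obtained from the radius estimated earlier; any residual difference reflects the rough estimate of the Hubble constant used above:

import math

C = 2.9979e8       # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (assumed standard value)
R_U = 1.85e26      # radius of the visible universe from the earlier estimate, m

S_U = C ** 2 / G                       # circumference of the event horizon per the equation above
print(f"S_U = c^2 / G = {S_U:.3e} m")
print(f"2 * pi * R_U  = {2 * math.pi * R_U:.3e} m")
# both values come out in the order of 1e27 m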

It is indeed amazing how it is possible that so much information which encompasses the entire theory of modern cosmology can be condensed in such a simple equation, which is a rule of three and thus the simplest equation in human mathematics. This is the virtue of the new theory of the Universal Law. It shows us that:

  • Simplicity is beautiful.
  • Simplicity is pure knowledge.
  • Simplicity is the utmost form of aesthetics.

For obvious reasons, cosmology can only assess the space-time of the visible universe and is not in a position to obtain any experimental evidence beyond its event horizon. This is the privilege of the new Axiomatics of the Universal Law – it assesses the primary term of All-That-Is epistemologically and not empirically (priority of axiomatization over empiricism).

As we see, the new Axiomatics effects an incredible simplification in our cosmological outlook and rejects the idea of an expanding universe as a false unilateral perception of the energy exchange between matter and photon space-time. This idea has given birth to many paradoxes, which are closely associated with the interpretation of the Doppler effect in the context of Hubble's law. This will be the topic of two more publications on the new cosmology of the Universal Law.

Read also: Doppler Effect Is the Universal Proof for the Reciprocity of Space and Time

 

III.4. The Role of the CBR-Constant in Cosmology

As already mentioned (here, here and here), the “big bang“ hypothesis of the standard model of cosmology is based on two pillars:

  •  the cosmic background radiation (CBR) and
  •  the expansion of the universe as assessed by Hubble’s law.

If these pillars can be interpreted in a different way, for instance, by the Universal Law, then the standard model must be refuted.

In the previous article, I have explained how the idea of an expanding universe has evolved in cosmology, namely, from the one-sided perception of the vertical energy exchange between matter and photon space-time. In this article I shall discuss the interpretational flaws of CBR in modern cosmology.

The experimental confirmation of the CBR, as predicted by Gamov on the basis of Friedmann’s model and coincidentally discovered by Penzias and Wilson in the sixties, has evoked the mistaken conviction among cosmologists that the theoretical assumptions of the standard model of cosmology hold true. The key assumption of this model is that, from the very beginning, the universe has been dominated by an extremely hot blackbody radiation (hot photon space-time) that has cooled down during the adiabatic expansion of the universe to the present temperature of about 3K – hence the term 3K-CBR.

The prediction of 3K-CBR on the basis of wrong assumptions and its subsequent discovery is a curiosity that will certainly enjoy an outstanding place in the future gallery of scientific blunders. The traditional interpretation of the CBR as a consequence of the expansion of the universe will be now rejected.

I have shown in Volume II, chapter 5.5 that the CBR-constant, which determines the relationship between the temperature of the material body and the frequency of the emitted photons, fmax = KCBR × T (see Volume II, equation (82) and my previous article), depends only on the speed of light c and the proportionality constant B of Wien's displacement law:

KCBR = c/B.

The constant B is one-dimensional space-time of a novel thermodynamic level of matter that has not been realized so far (see Volume II, chapter 5.5, equation (81a)).

In the view of traditional cosmology, the speed of light c is a fundamental constant that remained unchanged during the big bang and in the first seconds of expansion of the universe. This assumption allows the determination of Planck's parameters of the "big bang", which are basic quantities of the standard model of cosmology (for an understanding of the true meaning of Planck's parameters see my discussion and derivations in Volume II, chapter 9.7). Without the derivation of these parameters, the concept of the "big bang" would be meaningless, as it actually is, because Planck's parameters are a scientific "pulp fiction" produced by the empty brain cavities of present-day cosmologists and projected onto the infinite past.

And let us not forget that linear time is an illusion of the human mind and that there is no such thing as past, present and future, but that everything happens in the eternal Now, in the simultaneity of All-That-Is, so that one can reject the “big bang” hypothesis based entirely on this transcendental knowledge without further scientific ado.

According to the standard model, during the "big bang" matter did not exist, at least not in the form it is seen today. This would mean that the constant B did not exist: B = 0, and KCBR = c/0 = improbable event (mathematical operation not allowed). On the other hand, the CBR-constant determines the frequency of any emitted photon radiation for any temperature of matter, which is, in fact, a time quantity of the thermodynamic level of matter: fmax = KCBR × T. If we set for T the temperature of 2.73 K, we obtain exactly the maximal frequency of CBR, as experimentally measured by the COBE satellite (1):

fmax = KCBR × TCBR = 1.0345×10¹¹ Hz K⁻¹ × 2.73 K = 2.824×10¹¹ Hz

This is very powerful experimental evidence for the validity of the new cosmology of the Universal Law, which none of the currently accepted hypotheses, such as the inflation theory, can render.
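
As a numerical cross-check (illustrative only, and assuming that B is Wien's displacement constant, B ≈ 2.898×10⁻³ m·K, a value consistent with the KCBR quoted above), the arithmetic can be reproduced in a few lines of Python:

C = 2.9979e8           # speed of light, m/s
B = 2.898e-3           # Wien's displacement constant, m*K (assumed value of B)
T_CBR = 2.73           # CBR temperature, K

K_CBR = C / B          # CBR-constant, Hz per K
f_max = K_CBR * T_CBR  # maximal frequency of the CBR

print(f"K_CBR = {K_CBR:.4e} Hz/K")   # ~1.034e11 Hz/K
print(f"f_max = {f_max:.3e} Hz")     # ~2.82e11 Hz, as quoted above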

If we assume that matter did not exist at the beginning of the universe, then we must also accept that there has been no thermodynamic level during the “big bang“ and the short time thereafter. Therefore, the time of this level, the temperature, should not have existed either: T = improbable event (non-existent). In this case, we obtain for the time (frequency) of the photon space-time the following logical result:

fmax = improbable event ( KCBR) × improbable event (T) = improbable event 

The above equation symbolizes the entire nonsense of the standard model.

If there had been no matter, there would have been no temperature and subsequently no photon space-time in terms of electromagnetic waves with the time (frequency) and velocity as observed today: c = fλ = 0·λ = 0. The standard model, however, postulates that c was valid during the "big bang" (see the derivations of Planck's parameters in Volume II, Chapter 9.7).

However, if there were no photon space-time, there would have been no radiation and thus no CBR as observed today. The assumptions of the standard model have not been challenged yet, only because the epistemological background of space-time, that is, of space and time, is not an object of interest in present-day physics and cosmology. This agnosticism is the origin of all the flaws in these sciences.

On the other hand, if we assume that the universe has evolved gradually by developing new levels – however, at time intervals that are infinite in relation to the estimated age of the universe – we can imagine conditions in black holes, neutron stars, quasars, pulsars and other similar material systems of gravitation (see Volume II, chapter 9.9) similar to those suggested for the "big bang" and the short period of time thereafter. In this case, we need not extrapolate into the past, as is done in the standard model of present-day cosmology, but have to consider the finite lifetimes of stars in the context of the energy exchange between matter and photon space-time.

When the energy exchange from matter to photon space-time is perceived unilaterally as expansion that is going on into the future, one inevitably comes to the hypothesis of the “big bang“ when this process is traced back into the past. This false hypothesis follows from the idea that photon space-time is empty and homogeneous. This is the cardinal epistemological error of physics that engenders all the nonsense in cosmology.

The new Axiomatics clearly says that the CBR-constant is an absolute constant of the vertical energy exchange between the thermodynamic (kinetic) level of matter and the thermodynamic level of photon space-time, as assessed by the new Stankov's law of photon thermodynamics (Volume II, chapter 5.7), which is an application of the Universal Law. Thus the time f of the photon level depends on the time (temperature) of matter and vice versa: the temperature of matter depends on the frequency of the absorbed photons.

This mutual interdependence can be observed any time in daily life, e.g. the warming of metals by sunbeams and their subsequent radiation as heat. The frequency of the sunbeam photons depends only on the surface temperature of the sun (Volume II, equation (82)). Such phenomena are manifestations of the vertical energy exchange between matter and photons that takes place in both directions (conservation of action potentials).

The above equation of maximal frequency of CBR holds for any temperature. Black holes and neutron stars are known to have extremely high temperatures. When the frequency of the photons emitted by these gravitational systems is calculated with this equation, we obtain a cosmic background radiation in the gamma range. Such high frequency-CBR is regularly observed in astrophysics. Typically, this kind of CBR is not explained as a remnant of the big bang. This illustrates the ambiguity of current cosmological interpretations.

The equation of the maximal frequency of CBR is a very useful application of the Universal Law, with which we can determine the thermodynamic coefficients of vertical energy exchange of individual stars and other celestial bodies with photon space-time. In the next article, I shall show that the redshifts in the Doppler effect can be used in the same way to determine the vertical energy exchange between individual systems of gravitation and photon space-time. With respect to the theory of relativity, these absolute coefficients can also be called "relativistic coefficients of energy interaction". This is the only true explanation of Einstein's general theory of relativity, which he himself never understood.

This new correct interpretation of the observed redshifts in the universe eliminates the only experimental evidence that is currently used to prove the alleged validity of the “big bang” model of hot expanding universe.

Sic transit imbecillitas cosmologorum. (This is how the imbecility of the cosmologists goes by.)

Notes:

1. COBE Science Working Group, Spectrum of the cosmic background radiation, in P.J.E. Peebles, Principles of Physical Cosmology, Princeton University Press, New Jersey, 1993, p. 132.

 

III.5. Pitfalls in the Interpretation of Redshifts in Failed Present-Day Cosmology

The method of measurement of the escape velocity in Hubble's law is the determination of redshifts of selected galaxies. Hubble was the first astronomer to suggest a relationship between his application of the universal equation for the one-dimensional space-time of the visible universe (read here, here and here) and the redshifts observed by the Doppler effect. In my article on the Doppler effect from April this year

Doppler Effect Is the Universal Proof for the Reciprocity of Space and Time

I have shown that it is a ubiquitous phenomenon that demonstrates the reciprocity of space and time – that the two constituents (dimensions) of space-time are canonically conjugated entities. This fundamental knowledge is the core of all understanding of physics and cosmology. It is needless to reiterate, but I do it nonetheless for the sake of total clarity, that neither present-day physics nor cosmology has any clue about this fundamental property of energy = space-time = All-That-Is, which is the only object of their study. It is also the primary term of human or any other consciousness in All-That-Is. That is why the primary term is the first and only a priori axiom in the new Axiomatics of the Universal Law, and there should be no further axioms if it is to be a true science.

I have used the Doppler effect to explain the mechanism of gravitation in my recent article The Mechanism of Gravitation – for the First Time Explained. It proves:

  • Redshifts in visible light are observed when the space of the photon system confined by the source and the observer expands;
  • violet-shifts are observed when the space of the system contracts.

These changes of space are relativistic and occur simultaneously everywhere in the universe. For instance, one can observe both redshifts and violet-shifts of distant galaxies. Altogether, redshifts are predominant. This has led to the idea of using them as a method of measurement of the escape velocity of galaxies in an “expanding” universe which is a wrongly postulated and so far unverified idea (or better “idio“) in current failed cosmology.

Until now, modern cosmology has not been in a position to present a theoretical proof that redshifts really measure the expansion of the universe, as is clearly and surprisingly honestly stated in the following quotation from one prominent representative of this pseudo-science:

The gravitational frequency and temperature shifts between observers are equivalent to the effects of a sequence of velocity shifts between a sequence of freely moving observers. For the same reason, the surface brightness of an object at a different (gravitational) potential would vary with its redshift… This is not a cosmology, however, for it is not known how one could get a reasonable redshift-distance relation from a stable static mass distribution, or what provision one would make for the apparently finite lifetimes of stars and galaxies

If the redshifts of quasars did not follow the redshift-distance relation observed for galaxies, it would show we have missed something very significant… It is sensible and prudent that people should continue to think about alternatives to the standard model, because the evidence is not at all abundant

The moral is that the invention of a credible alternative to the standard cosmological model would require consultation of a considerable suite of evidence. It is equally essential that the standard model be subject to scrutiny at a still closer level than the alternatives, for it takes only one well established failure to rule out a model, but many successes to make a convincing case that a cosmology really is on the right track.

Quoted from: P.J.E. Peebles, Principles of Physical Cosmology, Princeton University Press, New Jersey, 1993, p. 226.

The last statement refers to what the new theory of the Universal Law has achieved – it proves all the mathematical experimental evidence (e.g. in the form of natural laws as mathematical equations) collected so far in physics and cosmology and rejects only its non-mathematical, verbal interpretations by the scientists. The latter are blatantly wrong, as they do not use or understand the new Axiomatics of the Universal Law that unequivocally defines all terms and concepts in science from the primary term of our consciousness. Instead, they have introduced, through their ambiguous, unprocessed language, infinite paradoxes, contradictions, blunders and outright stupidities, which I have resolved in tedious intellectual and forensic work in the new tetralogy of science as presented on this website.

I shall prove in the following that

redshifts measure the specific energy exchange of any gravitational system with photon space-time and therefore cannot be interpreted as evidence for the expansion of the universe.

It is a well-established fact that redshifts are a classical test for the validity of the theory of relativity. They are appreciated as the most exact test of this theory. The magnitude of the redshift depends on the magnitude of the local gravitational potential glocal = LRCG (see below). In the general theory of relativity, the redshift df/f gives the (relativistic) change of the gravitational potential dU in relation to the LRC of photon space-time given as the square speed of light:

df/f = dU/c².

This relationship was first postulated by Einstein in 1911 without comprehending its true meaning. Since then it has been empirically confirmed by numerous experiments with growing precision. The relativistic formula that is usually employed is an application of the universal equation as a rule of three:

df/f = dU/c² = LRCG/LRCp = EG/Ep = SP(A)

I have used the same application in Volume II, chapter 9.9 to establish the derivation rule of absolute coefficients of vertical energy exchange, with which we can build an input-output model of the universe based entirely on dimensionless numbers (quotients). This input-output model is equivalent to the continuum of real numbers. Therefore this rule proves in a fundamental theoretical manner why nature is of mathematical character and can be expressed in terms of mathematics, which itself is a hermeneutic system of the human mind and has no external object of study.

This theoretical breakthrough, which I made in 1995, has led to the resolution of the foundation crisis of mathematics that challenges the validity of the entire human science and in particular of the only exact discipline – physics – which is based on mathematical equations and calculations; from a methodological point of view, physics is mathematics applied to the physical world. All other present-day scientific disciplines, such as bio-sciences and social sciences, are not exact sciences but a conglomeration of unproven and rather subjective opinions (see Volume III and all my books on Human Gnosis on this website). On the foundation crisis of mathematics and its resolution in the new theory of the Universal Law read also:

The Universal Law of Nature

As already discussed, any relativistic presentation in physics is a comparison of the actual space-time of a system with photon space-time as the initial reference frame. In this particular case, the local gravitational potential of any celestial body, which, according to Einstein, is responsible for the local curvature of the empty homogeneous space-time, is compared to the constant LRC of photon space-time.

From the above equation, we can obtain the so-called Schwarzschild radius RS when we use Newton's law of gravity to determine the local gravitational potential on the surface of a celestial body (R is the radius of a star, planet, or any other celestial body; G is the gravitational constant; M is the mass of the celestial body):

df/f = dU/c² = GM/Rc² = RS/2R = SP(A)

The [1d-space]-quantity RS is obtained within geometry and is, in reality, a diameter and not a radius (imprecise terminology).

The Schwarzschild radius RS is of key importance to the theory of relativity, although this quantity cannot be explained in terms of knowledge. Traditionally, it is regarded as a measure for the relativistic effects of gravitational objects. In the light of the new Axiomatics, this space quantity assesses the local absolute coefficients of vertical energy exchange of the individual gravitational systems, such as stars, planets, pulsars, quasars, neutron stars, black holes etc., with photon space-time.
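
For orientation, the following Python lines evaluate the above relation for the Sun; the solar mass and radius are standard reference values inserted here purely as an illustration and are not taken from the text:

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.9979e8           # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg   (assumed standard value)
R_SUN = 6.957e8        # solar radius, m  (assumed standard value)

R_S = 2 * G * M_SUN / C ** 2               # Schwarzschild radius of the Sun
redshift = G * M_SUN / (R_SUN * C ** 2)    # df/f = GM/Rc^2 = RS/2R at the solar surface

print(f"R_S  = {R_S:.3e} m (about 3 km)")
print(f"df/f = {redshift:.3e}")            # ~2.1e-6, the classical solar gravitational redshift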

All gravitational systems undergo different states of material arrangement, such as white dwarfs, unstable stars, neutron stars, red giants etc., as assessed by Chandrasekhar’s equation of the boundary conditions of stellar transformation (finite lifetimes of stars). These stellar phases of specific space-time can be expressed by various quantities, such as mass, density, volume etc. and exhibit different coefficients of vertical energy exchange with photon space-time.

From this, we can easily conclude that we can build infinite levels of gravitational objects with respect to their specific vertical coefficient. The local geometry (structural complexity) of the space-time of the visible universe can be precisely described with such local coefficients. This aspect is further discussed in Volume II, chapter 9.9.

When the above equation of the Schwarzschild radius RS is derived from the equation of the circumference of the event horizon of the visible universe, SU = c²/G, as discussed in my previous publication, we obtain the following simple application of the Universal Law for the local space curvature Slocal as a function of the local gravitation glocal:

Slocal = [1d-space] = c²/glocal = world line of local curvature

This is the actual "universal field equation" which Einstein searched for in vain his whole life. It assesses the local curvature of photon space-time in terms of "world lines" Slocal (Weltlinien der Krümmung des Weltalls).

This [1d-space]-quantity is a function of the local gravitational potential given as the gravitational acceleration or field of the celestial objects of matter. This is, in fact, the only objective of Einstein’s general theory of relativity, which is geometry applied to space-time.
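
Purely as an illustration (assuming Earth's surface gravity g ≈ 9.81 m/s², a value not given in the text), the world line of local curvature at the Earth's surface evaluates as follows:

C = 2.9979e8        # speed of light, m/s
g_local = 9.81      # gravitational acceleration at the Earth's surface, m/s^2 (assumed)

S_local = C ** 2 / g_local                 # world line of local curvature per the equation above
print(f"S_local = {S_local:.3e} m (roughly one light-year)")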

He could not succeed, not only because Einstein did not master the complexity of the mathematical instruments (Riemann's topology) which he intended to implement (it is a well-known fact that Einstein was a poor mathematician), but essentially because he neither explained nor understood the epistemological background of his theory of relativity.

Let us now summarize the key knowledge that accrues from this elaboration:

The redshifts in the Doppler effect measure the local vertical energy exchange between the individual gravitational systems and photon space-time.

According to the principle of circular argument, these energy interactions are presented relativistically, in comparison to the constant space-time of the photon level, given as c, which is the universal reference frame (read here). Therefore, redshifts should not be interpreted as evidence for the expansion of the universe.

The idea of an expanding universe based on redshifts has led to a plethora of fundamental paradoxes that expose modern cosmology as a system of fallacies. The first paradox is associated with the interpretation of black holes. According to the present view, these gravitational systems exhibit the maximal redshifts that are known at present. This is the current scientific opinion on this issue, as expressed in the uniqueness theorems of black holes (M. Heusler, Black Hole Uniqueness Theorems, Cambridge University Press, 1996), which are applications of the Universal Law within mathematics.

If we now argue in the context of Hubble’s law, we must assume that black holes are the remotest objects from any observer within the visible universe (cosmological principle). In this case, we must expect to find black holes only near the event horizon of our visible universe (see above). The same holds true for quasars and pulsars, as they exhibit about 90% of the redshift-magnitude that has been determined for black holes.

However, the experimental evidence in astrophysics does not confirm this conclusion which follows logically from the current interpretation of Hubble’s law. In addition, this would be in breach of the cosmological principle which postulates an even distribution of celestial objects in the universe.

This paradox should be sufficient to reject the standard model on present evidence. It is indeed a mystery why this has not already been done, even without knowing the Universal Law.

The absurdity of the present interpretation of redshifts as evidence for an expanding universe becomes obvious when we analyse the present cosmological view of the age and radius of the “finite“ universe which is supposed to have emerged from the “big bang“. The general belief is that the objects with the maximal redshifts are the remotest from the observer. As a consequence, they should be regarded as the oldest material objects in the universe, if we accept the “genesis“ of the universe from the “big bang“ as stated in the standard model. This is explained by the fact that the light that comes from such objects should need the longest time to cover the greatest distance before reaching the observer. In this case, this light should be of the oldest origin – it should have existed from the very beginning of the universe.

The remotest objects that emit this light must have been very near to each other in this initial phase. As the universe is believed to have a finite age of about 15-20 billion years, this is considered to be the actual age of the light that comes from the remotest objects with the maximal redshifts.

The paradoxical nature of this concept becomes evident when we apply the principle of circular argument of the new Axiomatics as a deductive method. Let us depart from the cosmological principle as an application of the principle of last equivalence for the system “visible universe“. According to it, the above interpretation holds for any observer, at any place, at any time.

Let us assume that we are the initial observer placed on the earth. We can now imagine at least one more observer who is situated between us and the remotest object with the maximal redshift. In this case the second observer will measure redshifts from objects that are beyond our event horizon. The redshifts of such objects cannot be observed from the earth. These objects will have a greater distance from the earth than the remotest objects we can observe from our planet. At the same time they will be older than the oldest objects in the universe, the age of which is set equal to the age of the universe.

If we proceed with this deductive method, we can easily prove that there are objects in the universe that are infinitely remote from us and are thus infinitely old. It is important to observe that the same deductive method is used to define the term "infinity" in the mathematical theory of sets. This method departs from any number to define the infinity of the continuum and, since Frege, the continuum theory has been the foundation of modern mathematics (for further information see Volume I and Volume II).

In the new Axiomatics, we define the infinity of the primary term in an a priori manner and then confirm this property in a secondary manner by the empirical verification of the phenomenology of the parts (U-subsets). I have used exactly this second method to prove that space-time is infinite, that is, eternal. This proof should be sufficient to reject the standard model that assumes a finite age of the universe.

In fact, cosmologists can only measure the finite constant space-time of our visible universe as defined from the anthropocentric point of view of an earth’s observer. However, according to the cosmological principle, there are infinite visible universes, as there are infinite potential observers in space-time.

The idea of the standard model of cosmology that the universe is finite has led to another fundamental paradox, which has recently emerged from experimental evidence. The age of the universe is currently estimated by Hubble’s law to be about 15 billion years. However, recent empirical data in astrophysics does not fit into this concept. Astrophysicists have established that there are stars that are older than the universe. This is now called the “mother-child paradox“: the children (stars) are older than the mother (the universe).

The standard model postulates the emergence of stellar objects a long time after the occurrence of the "big bang". According to this model, it is impossible for the stars to be older than the universe. It is evident that this fact alone should be sufficient to reject entirely the standard model postulating a finite expanding universe. Again, we are tempted to ask why this has not been done before.

If we, instead, consider the finite lifetimes of stars as described by Chandrasekhar, we must conclude that we are not allowed to make any statements on the actual age of material systems, that is, of matter, based on the age of the emitted light that reaches the earth or a satellite launched from the earth. If stars periodically undergo different phases of material organisation, a fact that is beyond any doubt, how can we know their actual age if we can only determine the age of the light emitted during a certain phase of transition (see also quotation above)?

For instance, when we register a light signal from a nova that is, let us say, seven billion years old, we can only say that seven billion years ago, that is, at a time when the earth did not exist, this particular star had this material configuration. As novae are recurrent stars, we cannot know their past or present states. For instance, there is no way of knowing how many transitions this nova has undergone in the past, that is, how old it really is.

These arguments are based on common sense and are accessible even to the layman. This cannot be claimed for the arguments of modern cosmology. In the last few years (with reference to the 90s), there has been a growing number of publications on cosmology that document the epistemological mess of this discipline. It is futile to discuss them. I shall only mention the title of a recent book that is symbolic of this state of the art: T. Ferris, The Whole Shebang, A State-of-the-Universe(s) Report, Weidenfeld & Nicolson, London, 1997.

Read also: The "Big Bang" Is Yet to Come in the Empty Brain Cavities of the Cosmologists – Two PAT Opinions

Present-day cosmology is indeed a terrible “She Bang” beginning with the “Big Bang” (The actual etymology of the word “shebang“, which you will not find on the Internet, comes from the Slavonic (Bulgarian) word “shibam“, through Yiddish, which means “to fuck“, so that the exact connotation of shebang should be “fucking shit” (shibano), note George).

In this respect, it is quite amusing to observe how many cosmologists earnestly believe in the existence of many universes, although they still believe in the singularity of the "big bang". This is the culmination of human insanity. Why don't they forget their pseudo-science and come to us to enjoy the clarity of mind based on our multidimensional gnostic thinking and daily experience as ascended masters?

 

III.6. What Do “Planck’s Parameters of the Big Bang“ Really Mean?

When we extrapolate the hypothetical expansion of the universe in the past, we inevitably reach a point where the universe must be presented as a “space singularity“. This state of the universe is called “big bang“ in the standard model of current cosmology, which is different from the standard model in physics. In this space-less state, matter (energy) is believed to have been a homogeneous entity of extremely high density and temperature (see Volume II, chapter 9.8). Cosmologists postulate in an a priori manner that during this initial phase of universal genesis only three natural constants have remained unchanged: the speed of light c, the gravitational constant G and Planck’s constant h (the basic photon). Modern cosmology gives no explanation for this subjective preference.

We have already met a similar concept to the “big bang“ in classical mechanics – the mass point. While the mass point is an abstraction (object of thought) of real objects within geometry obtained by means of integration, the big bang is a mathematical abstraction of the Whole. The prerequisite for this assumption is that space is empty and homogeneous. This error is introduced in cosmology through Einstein’s theory of relativity, but it goes back to Newton’s empty Euclidean space of classical mechanics, which Einstein failed to revise:

Read:  The Space-Time Concept of the Special and General Theory of Relativity

The End of Einstein’s Theory of Relativity – It Is Applied Statistics For the Space-Time of the Physical World

The standard model of cosmology results from physics' genetic failure to define the Primary Term of human consciousness = the Whole from an epistemological point of view, as is done in the Axiomatics of the Universal Law, upon which all human thinking should be based. Although the "big bang" is an object of thought and never existed, cosmologists earnestly believe that they can mathematically describe this condition by the so-called "Planck's parameters". This name stems from Planck's equation, which is used for the derivation of these quantities. Since Planck's name was attributed to these parameters after his death, it is highly unlikely that he would have consented or would have been happy to have his name associated with such flawed concepts.

The calculation of the hypothetical parameters of the “big bang“ is another outstanding blunder of cosmology of great didactic and historical value, comparable only to the medieval religious dogma postulating that the earth is flat and represents the centre of the universe. Before we discuss Planck’s parameters of the “big bang“, a few words on the history of the standard model.

If we define Einstein as the “grandfather“ of modern cosmology, we should look upon de Sitter as the father of this discipline. The “Einstein-de Sitter universe“ is the first mathematical model of the universe that is still considered an adequate introduction to this discipline. While “Einstein’s universe“ is static but contains matter (space-time relationships), “de Sitter’s universe“ is dynamic but completely empty. This is, at least, Eddington‘s interpretation of these models. The “Einstein-de Sitter universe“ became famous because it implied the “big bang“ as the moment of genesis.

The term "big bang" was established only in 1950, when Fred Hoyle mentioned it for the first time in an interview in a derogatory manner, as he vehemently rejected this idea his whole life. The scientific penetration of this model began, however, ten years earlier and gained momentum in the sixties. The Russian scientist Friedmann was the first to introduce the idea of an expanding universe in his mathematical model (1922). Departing from the theory of relativity, he destroyed Einstein's hopes of establishing a single irrevocable model of the universe. Instead, Friedmann presented three possible solutions (objects of thought), depending on the magnitude of the quantities (density) used (see Volume II, chapter 9.3).

The problem of all cosmological models is that they rely on exact measurements of the density of the universe but cannot account for more than 90% of the postulated mass in the universe, which they then define as "dark matter". This embarrassment stems from their denial that photons have a mass, as they are incapable of interpreting their own definition of mass correctly. This chain of related profound blunders in physics and cosmology is a leitmotif in all my scientific writings.

As Friedmann’s work remained unnoticed during the Russian civil war, the Belgian Jesuit Lemaître was the first to popularize this concept in the West. The pre-war heritage of cosmological ideas in physics was further developed by Gamov, a student of Friedmann, under more favourable conditions after the war. He is the actual father of the standard model. The explosion of modern cosmological models began in the seventies, and the diversity of conflicting ideas born in this period reached a state of inflation in the eighties. The nineties can be characterized as a period of prolonged stagnation that has been abruptly terminated by the discovery of the Universal Law in 1995, a quarter of a century ago, by the author. This is the short and not so glamorous history of this new, entirely false physical discipline.

The three Planck's parameters, which are believed to assess precisely the initial conditions of the universe, are: Planck's mass, Planck's time and Planck's length. As we see, cosmologists have also recognized the simple fact that the only thing they can do is to measure the time, space, or space-time relationships of the systems – be they real or fictional. The theoretical approach to the "big bang parameters" departs from the Heisenberg uncertainty principle, that is, it departs from the basic photon h, as discussed at length in chapter 7.3, Volume II. The basic photon with the mass m_p can be regarded as the elementary momentum of the universe:

p = m_p c = 2.21×10⁻⁴² kg m s⁻¹

The mass of the basic photon is calculated by applying the axiom of conservation of action potentials, for instance, for its energy interaction with the electron as measured by the Compton scattering: E_A,e = m_e c λ_c,e = h = m_p c λ_A, where λ_c,e is the Compton wavelength of the electron and λ_A is the Compton wavelength of the basic photon h; m_e is the mass of the electron and m_p is the mass of the basic photon h (see Table 1 below). Hence:

m_p = h/c² = h/(c λ_A)
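As a quick numerical cross-check of these two magnitudes, here is a minimal Python sketch (my own illustration, not part of the original derivation) that reproduces the quoted values from the CODATA constants h and c; the variable names are merely descriptive:

```python
# Numerical cross-check of the basic photon mass and the elementary momentum
# quoted above, using standard CODATA values (illustrative sketch only).
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s

m_p = h / c**2       # mass of the basic photon h, ~0.737e-50 kg
p   = m_p * c        # elementary momentum of the universe, ~2.21e-42 kg m/s

print(f"m_p = {m_p:.3e} kg")     # 7.372e-51 kg
print(f"p   = {p:.3e} kg m/s")   # 2.210e-42 kg m/s
```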

In cosmology, the axiom of conservation of action potentials is applied for the fictive interaction between the basic photon h and the hypothetical big bang, where the latter is regarded as another action potential: E_A,big-bang = m_Pl c λ_c = h = m_p c λ_A. From this, the Planck's mass m_Pl of the big bang is determined according to the above equation:

m_Pl = h/(c λ_c) = m_p λ_A/λ_c

Cosmology gives absolutely no explanation as to why this equivalence has been chosen for the determination of the abstract quantity "Planck's mass". Therefore, the above equation should be considered a subconscious, irrational application of the axiom of conservation of action potentials. The wavelength λ_c from this equation is defined as Planck's length of the "big bang":

l_Pl = λ_c = [1d-space].

For this reason we can also call it the “Compton wavelength“ of the “big bang“, analogously to the Compton wavelengths of the elementary particles (see Table 1 below). In the light of the new Axiomatics, it is a one-dimensional space quantity of the hypothetical space of the “big bang“:

l_Pl = λ_c = [1d-space] of the hypothetical "big bang"

The above equations demonstrate that the description of the space-time of the hypothetical "big bang" departs intuitively from the correct notion of the Universal Law, which is the origin of all scientific ideas, as all basic ideas in science are of mathematical origin. However, the interpretation of such mathematical ideas at the rational level is full of logical flaws that vitiate all systems of science which have been developed so far.

Planck's mass m_Pl can be calculated only after Planck's length λ_c of the "big bang" is known. What is the traditional approach of modern cosmology to this problem? As expected, it departs from the event horizon l of the "big bang" as the structural complexity K_s of this system. In this sense, Planck's length l_Pl = λ_c and the event horizon, expressed as a radius, are set equivalent (definition within mathematical formalism):

l = l_Pl = λ_c

The event horizon l of the "big bang" is calculated by applying the same derivation of the Universal Equation as used for the Schwarzschild radius: R_s/2 = GM/c²:

l = G m_Pl/c²

In chapter 9.6, Volume II, I have shown that this application of the Universal Equation assesses the absolute coefficients of the vertical energy exchange between individual gravitational systems of matter and photon space-time. In this sense, the "big bang" is regarded as a hypothetical system of matter. This is in apparent contradiction to the standard model, which considers the "big bang" as a state of condensed homogeneous radiation. According to this model, matter has evolved at a later stage. From the above equations, we can derive the Planck's length:

l_Pl² = λ_c² = Gh/c³

Some authors prefer to use h/2π or even h/π instead of h. This is their degree of mathematical freedom. In this case, the value of the Planck's length is √(2π) or √π times smaller, respectively, than in the above equation, as h stands under the square root. The method of measurement of this space quantity is irrelevant from a cognitive point of view as the "big bang" has never existed – it is a mathematical fiction, an object of thought created by the cosmologists in their unprocessed consciousness.
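As a numerical illustration of this degree of freedom, the following Python sketch (my own, under the document's h-based convention) computes the Planck length once with h, as in the above equation, and once with ħ = h/2π, as most textbooks and Wikipedia do; since h stands under the square root, the two values differ by the factor √(2π) ≈ 2.51:

```python
import math

# Planck length computed with h (as in the text) and with hbar = h/(2*pi)
# (textbook convention); illustrative sketch only.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s

l_pl_h    = math.sqrt(G * h / c**3)                    # ~4.05e-35 m
l_pl_hbar = math.sqrt(G * (h / (2 * math.pi)) / c**3)  # ~1.62e-35 m

print(f"l_Pl with h    : {l_pl_h:.3e} m")
print(f"l_Pl with hbar : {l_pl_hbar:.3e} m")
print(f"ratio          : {l_pl_h / l_pl_hbar:.3f}")    # 2.507 = sqrt(2*pi)
```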

The equation for the Planck's length contains the three natural constants, c, G, and h, that have been postulated to hold in the "big bang". This is a vicious circle – an a posteriori adaptation (manipulation) of the physical world to comply with their mathematical derivation (after all, cosmologists have to perform some derivations so as to have an occupation, and for this they need certain natural constants). This approach is defined as "fraud" in science and is much more common than is generally believed – actually, from a higher vantage point of view, there is nothing else in current science. That is why this false old science will be abolished once and for all in the coming days, weeks and months and will be substituted with the new Science and Gnosis of the Universal Law, which is essentially the science of ascension and the theoretical foundation for rediscovering the true multidimensional nature of the human race.

The three constants assess the space-time of the photon level, which itself is determined by the space-time characteristics of gravitational matter. This basic proof for the closed character of space-time is presented in Volume II, chapter 9.9. There I prove that the properties of photon space-time, as assessed by the magnetic field length l_μo (equation (110), Volume II) and the electric acceleration or field E_o (equation (109), Volume II) of photon space-time, from which the speed of light is obtained in the famous Maxwell's equation

c = l_μo E_o (see equation 105, Volume II),

depend on the average rotational characteristics of the gravitational systems in the universe, such as black holes, quasars, pulsars, neutron stars, etc. This new revolutionary scientific proof in cosmology is a consequence of the vertical energy exchange between matter and photon space-time and fundamental evidence that space-time is a closed entity of open interacting U-subsets.

According to the standard model in failed present-day cosmology, these gravitational systems were not developed in the initial phase of the universe. They have emerged at a much later stage, during the epoch of hadrons (see Table 9-1, Volume II). This would mean that these celestial objects, which are believed to be a late product of the alleged genesis of the universe, have already determined the three natural constants, c, G, and h, that existed in this form during the “big bang“, that is to say, before gravitation and electromagnetism existed. This proof illustrates again the absurdity of the standard model. It is cogent that there is not a single statement in the standard model of cosmology that is true. This model is, indeed, the greatest intellectual calamity in the history of physics and science.

The equation of the Planck's length (see above) can be solved for the universal gravitational potential E_AU = c³/G (see equation (30), Vol. II). When we set the reciprocal of this action potential 1/E_AU = G/c³ in the equation of the Planck's length above, we obtain the following remarkable equation:

l_Pl = √(Gh/c³) = √(h/E_AU) = 4.05×10⁻³⁵ m

According to modern cosmology,

Planck's length is the square root of the quotient of the two fundamental action potentials of space-time: the basic photon h, which is the smallest (elementary) action potential we know of, and the universal action potential E_AU, which is the aggregated product of all underlying action potentials with respect to the surrogate SI unit of time 1 s⁻¹.

We can derive from h the space-time of all elementary particles (see Table 1 below) and from E_AU the space-time of the visible universe. Thus Planck's length is a quotient (relationship) of the [1d-space]-quantities of the smallest and the biggest action potential of the universe with respect to the SI unit 1 second (building of equivalence) according to the principle of circular argument:

l_Pl = √(h/E_AU) = SP(A)

In this remarkable equation, the time of the basic photon is set equivalent to the time of the universal action potential per definition with respect to the SI system: f_h = f_EAU = 1 s⁻¹ = SP(A) = 1 unit = certain event. SP(A) means the statistical probability of the event A and is another presentation of the probability set (1,0) in the new Axiomatics of the Universal Law. The probability set (1,0), itself, is identical to the continuum of all numbers (0, ∞), as has been proven in a profound manner in Volume I and Volume II on the new physical and mathematical theory of the Universal Law.

The above equation by no means confirms the existence of the “big bang“, but simply illustrates the ubiquitous validity of the principle of circular argument as a method of definition and measurement of physical quantities. Indeed, it is impossible to perceive why the comparison of the smallest and the biggest action potential of space-time should be a proof for the existence of the “big bang“. Both action potentials are products of constant space-time as observed today and none of them could have existed in the space-singularity of the big bang. This is cogent when the space magnitudes of the two potentials are compared with the magnitude of Planck’s length of the hypothetical “big bang“. We leave the proof of their incommensurability as an exercise for the reader.
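For completeness, a short Python check (my own sketch, using the same h-based values as above) shows that E_AU = c³/G has the magnitude quoted below and that √(h/E_AU) returns exactly the same Planck length as √(Gh/c³):

```python
import math

# Check of the identity l_Pl = sqrt(G*h/c^3) = sqrt(h/E_AU) with E_AU = c^3/G
# (illustrative sketch in the document's h-based convention).
G = 6.674e-11
h = 6.62607015e-34
c = 2.99792458e8

E_AU = c**3 / G                        # universal action potential, ~4.04e35
l_pl_direct   = math.sqrt(G * h / c**3)
l_pl_from_EAU = math.sqrt(h / E_AU)

print(f"E_AU          = {E_AU:.3e}")               # 4.037e+35
print(f"sqrt(G*h/c^3) = {l_pl_direct:.3e} m")      # 4.051e-35 m
print(f"sqrt(h/E_AU)  = {l_pl_from_EAU:.3e} m")    # identical value
```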

The above derivations of Planck's parameters within the new Axiomatics of the Universal Law illuminate the entire nonsense of the standard model. They explain the background of the epistemological flaws in cosmology. The universal action potential E_AU tells us that

every second the mass (space-time relationship) of M = 4.038×10³⁵ kg is exchanged between matter and photon space-time in the visible universe.

If photon space-time is regarded as empty, massless, homogeneous space or vacuum, as is done in cosmology today, then it is quite logical to neglect the energy exchange from photon space-time to matter and to  consider only the energy exchange from matter to photon space-time. This energy exchange is associated with space expansion. If at the same time, the finite lifetimes of stars are neglected, that is, their energy exchange with photon space-time, for instance, the transformation of space and matter into energy at the event horizon of black holes is not considered, the only possibility of explaining this fictional expansion is to assume that the universe has been subjected to an adiabatic expansion from its very beginning. However, it remains a mystery where the space that fills the gaps between the escaping galaxies comes from. Although this question is obvious in terms of common sense, it is not posed in modern physics. This is another typical example of the self-inflicted cognitive misery of modern cosmology.

The linear extrapolation of this hypothetical adiabatic expansion of the universe into the past ends inevitably with a space-less point, the "big bang" (the name is of no importance), where all known physical laws as determined today lose their validity. At least, this is what physicists make us believe at present. While this moment of "virtual genesis" may suit some popular religious beliefs (as promoted by the Jesuit Lemaître, who was closely associated with the Vatican), it has nothing to do with an objective science that should understand the object of its study.

Once Planck's length is computed, one can quite easily determine any other quantity of the hypothetical "big bang", because the universal equation is a rule of three. For instance, we obtain the following value for Planck's mass:

m_Pl = h/(c l_Pl) ≈ 5.5×10⁻⁸ kg

(Note: The values for the Planck's parameters given in Wikipedia are computed with ħ = h/2π and are therefore √(2π) times smaller (see above), though this is irrelevant as these quantities are science fiction. However, the magnitudes are the same as given in this elaboration and are also used in many textbooks on cosmology from which I have taken these calculations.)

The same result is obtained when the mass m_p of the basic photon is used:

m_Pl = m_p λ_A/l_Pl = 0.737×10⁻⁵⁰ kg × 3×10⁸ m / 4.05×10⁻³⁵ m = 5.5×10⁻⁸ kg

The above equation demonstrates that the basic photon h is the universal reference system of physics according to the principle of circular argument. From Planck's length, one can easily obtain the hypothetical magnitude of the second constituent of space-time – Planck's time t_Pl:

t_Pl = l_Pl/c ≈ 1.35×10⁻⁴³ s
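The three quoted magnitudes can be verified with one more short Python sketch (again my own illustration, in the h-based convention used here); it also confirms that G·m_Pl/c² returns the same length as the Schwarzschild-type relation applied earlier, since the whole construction is a rule of three:

```python
import math

# Cross-check of the three Planck parameters in the h-based convention of the
# text; lambda_A = c * 1 s is the Compton wavelength of the basic photon h
# (variable names are illustrative, not taken from the original).
G = 6.674e-11
h = 6.62607015e-34
c = 2.99792458e8

l_pl     = math.sqrt(G * h / c**3)   # Planck length, ~4.05e-35 m
m_p      = h / c**2                  # basic photon mass, ~0.737e-50 kg
lambda_A = c * 1.0                   # ~3e8 m

m_pl_1 = h / (c * l_pl)              # Planck mass via h/(c*l_Pl), ~5.5e-8 kg
m_pl_2 = m_p * lambda_A / l_pl       # same value via m_p*lambda_A/l_Pl
t_pl   = l_pl / c                    # Planck time, ~1.35e-43 s
l_back = G * m_pl_1 / c**2           # Schwarzschild-type relation -> l_pl

print(f"m_Pl = {m_pl_1:.3e} kg  (check: {m_pl_2:.3e} kg)")
print(f"t_Pl = {t_pl:.3e} s")
print(f"G*m_Pl/c^2 = {l_back:.3e} m  (= l_Pl)")
```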

According to modern cosmology, the three Planck’s parameters completely describe the “big bang“. It maintains that all physical laws have “lost their validity“ in this hypothetical state, except the three constants, c, G, and h, with the help of which Planck’s parameters of the “big bang“ are computed.

However, we have shown that all known natural constants and physical laws can be derived from each other, or more precisely, from the constants of photon space-time, c, G and h, when the Universal Equation is applied, as this unique and phenomenal Table illustrates at one glance.

Therefore, we must conclude that all known physical laws, which are actually applications of the Universal Law, were valid in the "big bang", because the Universal Law is valid in all eternity and is not limited by the finite age of a universe that has allegedly emerged from a hypothetical big bang some 10–15 billion years ago, as the present-day cosmologists make us believe in their "scientific insanity". At the same time I have proved that cosmologists, astronomers and astrophysicists can only observe the visible universe, which is a constant system of the Whole, while All-That-Is is infinite and, in addition, multidimensional and thus not accessible to the limited human senses that create the current 3D illusion here on earth and its optical cosmological outlook:

Read: The New Transcendental Cosmology of the Universal Law

The only possible consequence of this conclusion is that there has been no "big bang" – this event has only occurred in the mathematical fantasy of the cosmologists.

What is the view of modern cosmology on this issue? If we try to learn more about this exotic, initial phase of the universe, we are consoled by such sibylline statements (Physik, PA Tipler, p. 1478, German ed):

"The relativistic space-time (of the big bang) is then no longer a continuum, and we even need a new theory of gravitation – of quantum gravitation or super-gravitation."

Considering the fact that physics has no theory of gravitation, it sounds rather strange to demand a new theory of "quantum gravitation" or "supergravitation", whatever that means. Isn't it much simpler to discard the standard model, as has been done in this article, and point out the real cause why cosmology is a fake science?

Read here: The “Big Bang” Is Yet to Come in the Empty Brain Cavities of the Cosmologists – Two PAT Opinions

 

III.7. The “Big Bang” Is Yet to Come in the Empty Brain Cavities of the Cosmologists – Two PAT Opinions

The Truth about Inflation Theory

Daniel Akkerman, May 14, 2017

The inflation theory is one of the best examples of (scientific) Stockholm syndrome. Instead of recognizing the wrong basis of many of the theories of modern cosmology, an arbitrary, unnecessarily complex “escape card” is invented which does everything to keep the old paradigm of thinking alive, everything except employing logic that is. It is kind of like the infinite QE of the (not too big) to-fail banks. Instead of admitting defeat, they double down on stupidity.

But so as not to waste the efforts of so many scientists, I have tried to somewhat preserve the theories of cosmology, although in a slightly different manner. Forgive me if I have copy-pasted some of George's article as a baseline.

1. The scientist’s (or human) brain is homogeneous and isotropic on average, at any place, at any time. This is called the “inner-cosmological principle“.

2. The brain expands according to Potato's law with the escape velocity v of the neurons, which is proportional to the distance dl of good ideas to the thinker.

3. Now to define correct and wrong ideas. A correct idea is simply called an "idea", and a wrong idea, which is when one introduces N-sets (thinking away from the source), will be called an "idio" (for further information on the etymology of this word, check the famous novel of Dostoevsky titled with the same word by adding a (t) at the end).

The Potato constant is estimated from the quantity (and quality) of wrong ideas produced by selected brains per timeframe (idio/t). Its value varies roughly from brain to brain from 0 idio/s to 2012 idio/s per Mbc (megabeckow). Latest estimations tend towards the higher value.

Now, the inflation theory proves the following: the scientist, or human brain expands over time, increasing the distance between neurons slightly, which causes weakened signals. This means signals are able to reach less far through the neural networks, leading to progressively more stupid ideas. This process continues until eventually the individual will have been fully converted to a kind of cosmic radiation, a thermal death one might say.

The resulting cosmic radiation will then travel back in time because of quantum tunnelling, to be measured by scientists in the past, leading them to eventually invent the inflation theory slightly earlier than originally. The resulting cycle will lead to an infinite stupidity.

At this point you may think, that I have never given any conclusive proof for the inflation theory. However the answer is perfectly obvious: as the color of blood is red, and the brain is full of blood, we have the ultimate proof for the theory. The red color irrevocably proves it. Although some experimental measurements suggest purple-shifts in the brains of a small part of the population (the ascending ones), these numbers are sufficiently small that they can be regarded as the exception that proves the rule.

One good thing about the inflation theory, is that it may actually make humanity a lot smarter. Let’s say the average human has a Potato constant of 1000 idio/t. As the electrical signals in the brain weaken, due to increased distance, there may be a point where the brain can create less than 1000 signals per timeframe. In this case, the Potato constant of such individuals must logically decrease.

It is a common misconception that idios (wrong ideas) do not have gravity. However, their gravity is irrevocably proven when one examines the bookshelves of random humans, many of whom have copies of such books as "The Origin of Species" (Darwin) or "A Brief History of Time" (Hawking). In fact some of these ideas have reached critical mass and are collapsing into themselves, attracting all kinds of adjacent masses (followers).

In the past many complicated measurements have been made, with infinitely expensive machines to determine the exact gravity of such idios. But modern technology gives us an easy solution. Take for example the book “A Brief History of Time”. If one wishes to determine the gravity of this situation, it is very simple. There is a secret website, unknown to most people, which provides us with very accurate data, called amazon.com. Here, one simply finds the price in dollars, amount of copies sold, and mass of the book in question. Then multiply those 3 values with each other.

But this is not everything. Where scientists previously thought such books are the smallest units of idios in the universe, new experiments have proven the opposite. It all started from an argument between two famous writers of modern scientific papers. As they both threw a book at the other in rage, the two copies of “The Selfish Gene” by Richard Dawkins, and “Relativity: The Special and the General Theory” by Albert Einstein collided in mid-air.

The speed of collision of these two books was incredible, and at the point of collision, the books disappeared, and in their place appeared a number of smaller bundles of paper. Amongst the stack, were a few editions of the famous scientific magazine “Nature”, and some, until that day undiscovered essays of Sir Isaac Newton, mixed with long-lost notes of Charles Darwin.

Scientists all over the world immediately jumped on this opportunity and started using all kinds of machinery to collide books together at high velocity. Initial experiments with mediaeval catapults have produced some new findings, such as prehistoric cave paintings showing vehicles with square wheels, and printed collections of internet comments written by laymen. Some examples of the content of the comments have users considering Columbus an explorer who discovered America, and denying that many present and past scientists such as Newton and Darwin considered themselves religious.

Construction has started on a special new project, hoping to collide books at an ever greater velocity by printing them on metal pages and accelerating them through a giant, kilometres-long torus filled with magnets. It is scheduled to start operation in 2019. Scientists also hope to experiment with various religious, economic and political works in the new Book-Collider.

___________________________________________

Big Bang Cosmology; Subterfuge for the Creationist Bible-based Genesis Model

Patrick Amoroso, May 14, 2017

"Everything should be made as simple as possible, but not simplistic." – Albert Einstein

In order to understand the debate concerning any challenge to the universally accepted doctrine of the "Big Bang Theory", prudence demands that we investigate its early origin and what underlying motivations would contribute to such a farcical notion: that from one primordial singularity all the energy and mass of our currently perceived universe arose in a quantum nano-second of explosive creation, and here we are. The advent of Einsteinian physics in the early twentieth century had posed some mathematical irregularities, and in order for the General Theory of Relativity to make sense in any measure of rational deduction, a predetermined acceptance of an expanding universe had to be part and parcel of this theory.

Einstein readily acknowledged this dilemma by introducing his cosmological constant, which in essence was a fudge factor applied to his General Field Equations in an attempt to reconcile them with a static, non-expanding universe, or what the British cosmologist Fred Hoyle would later postulate as the "Steady State Theory". More on him later. An expanding-universe concept would have to be introduced, and it is here where the story gets interesting.

Enter Monseigneur Georges Lemaître, a Belgian Catholic priest, astronomer, mathematician and holder of degrees from the University of Cambridge, who also enrolled briefly at MIT and the Harvard Observatory. While Einstein's General Theory had it relatively right at the very beginning by proposing a static, non-expanding universe, Lemaître would amend that to suggest the concept of a primordial atom. Instant presto! A man of God would now merge two opposing conceptual ideations into one composite theory of false science: an expanding universe and a theosophical argument tainted in pseudo-science to augment the Genesis myth of "In the beginning, God created…". Einstein was so taken in by this new development that he would later commit the hara-kiri measure of falling on his own sword by stating that the introduction of the cosmological constant was his "greatest blunder". He further derided himself with the following quotation in the 1930s, after Lemaître posited his primordial-atom theory: "This is the most beautiful and satisfactory explanation of creation to which I have ever listened."

Incidentally, it was never referred to as the "Big Bang Theory" at this time, although interestingly enough, some twenty years later in the 1950s, Pope Pius XII not only declared that the big bang and the Catholic concept of creation were compatible but also embraced Lemaître's idea as scientific validation for the existence of God and of Catholicism.

It was the famed British astrophysicist and cosmologist Fred Hoyle who actually coined the title "The Big Bang" in a radio interview when questioned about the origin of the universe and deridingly stated, "Oh, the Big Bang." He was later to be denied a Nobel Prize. Basically, the Steady State Theory opined that the universe is expanding but that new matter and new galaxies are continuously created in order to maintain the perfect cosmological principle, the idea that on the large scale the universe is essentially both homogeneous and isotropic in both space and time and therefore has no beginning and no end. It is interesting to note that this is a modified version of what Dr. Stankov posits in the Universal Law and his treatise in Volume II: The Universal Law. The General Theory of Physics and Cosmology.

As an active astronomer, that is, when the weather in the northeastern United States allows me to be one, I will now speak to the issue of how the study of cosmology is fraught with irrational and unproven epithets from hyper-educated, egotistical and narcissistic charlatans who take great delight in their grand equations and unproven testaments. Case in point: the concept of dark matter and dark energy.

From Wikipedia, we get: Dark matter is a hypothetical type of matter distinct from dark energy, baryonic matter (ordinary matter such as protons and neutrons), and neutrinos. The existence of dark matter would explain a number of otherwise puzzling astronomical observations.[1] The name refers to the fact that it does not emit or interact with electromagnetic radiation, such as light, and is thus invisible to the entire electromagnetic spectrum.[2] Although dark matter has not been directly observed, its existence and properties are inferred from its gravitational effects, such as the motions of visible matter,[3] gravitational lensing, its influence on the universe's large-scale structure, on galaxies, and its effects in the cosmic microwave background.

And then, from the same source: In physical cosmology and astronomy, dark energy is an unknown form of energy which is hypothesized to permeate all of space, tending to accelerate the expansion of the universe.[1][2] Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate.

How absurd and naive can the scientific community be? It accepts two entirely theoretical, diametrically opposed and unproven entities, without even understanding what they consist of, all the while cloaked in a feigned approach to offset both a super-expanding universe on the one hand and a contracting universe on the other. This is accepted as truth without any proof, and if we did not have these fictitious forces in a tug-and-pull balancing act of universal chess, we would be headed for the Big Crunch, a reversal of the expanding Big Bang theory into its exact opposite.

Prior to finding Dr. Georgi Stankov's site, I was engaged for years in an arduous and exhaustive study of the works of Einstein, Stephen Hawking, Roger Penrose, Richard Feynman and others. It wasn't until I immersed myself in the transcendental and abstract reasoning of axiomatic logic that I began to see the brilliance of his approach. Remember Einstein's admonition at the beginning of this essay.

Let's have some fun. Today scientists look for a God particle at CERN, posit the fabrication of gravitons and would have us all believe that dark matter and dark energy are really determinants in understanding cosmology. Hmmm… The god particle has always been front and center, folks. It is the photon. Particle or wave, it really doesn't matter, since photons have mass and operate in a continuous gradient potential interchange with 3D mass, and it is this interaction in an energy relationship that accounts for gravity, eliminates dark energy and dark matter from consideration and tosses those fabrications to the realm of absurdity.

Another consideration for the photon as the god particle: no matter what religious or theosophical tradition you may analyze across the regions and cultures of the world, the one distinguishing feature that is prominently invoked when entertaining concepts of a divine nature is LIGHT – "Let there be light", "You are the light of the world", "I am the light". As I mentioned previously, the divine is always referred to as light. That is why we, the PAT, are Light warriors of the first and last hour engaged in a divine mission.

