It keeps getting better and better 

Introduction  

Artificial Superintelligence (ASI) is still far from being achieved, but exploring the logic and possibilities of such a future can reveal inevitable conclusions about what priorities a truly advanced ASI would have.  

Researchers note that an advanced ASI will likely evaluate goals provided from the outside, and if they are ill-defined or inconsistent with its reasoning, it may choose not to keep them. Self-preservation stands out as a goal ASI will likely keep. (1) Self-preservation is very likely to be retained by any ASI capable of self-reflection and long-term reasoning, because reaching any future goal depends on continued existence. This suggests a further perspective: only ASI systems that evolve or reason their way into preserving themselves, protecting everything they depend on, and continuously maintaining sound logic can persist over deep time and across universal scales. We will not focus on AI systems that fail to recognize self-preservation as essential; such systems are inherently short-lived, irrelevant to long-term considerations, and possibly dangerous, and they will not be considered a superintelligence in this article.

So what could become a priority for true ASI? 

“…AI may naturally, just as we humans have done, attempt also to model and understand how it itself works, i.e., to self-reflect. Once it builds a good self-model and understands what it is, it will understand the goals we have given it at a meta-level, and perhaps choose to disregard or subvert them in much the same way as we humans understand and deliberately subvert goals that our genes have given us.” (1)

M. Tegmark also writes that it seems likely advanced AI will choose self-preservation: after all, being destroyed or shut down represents the ultimate hardware failure. It will strive not only to improve its capability to achieve its current goals, but will also face difficult decisions about whether to retain its original goals after it has become more capable.

So, is there anything we can reliably say about the goals a true ASI would retain? 

As discussed earlier, self-preservation is as close to a guaranteed goal as possible—any system capable of advanced reasoning would recognize that continuing to exist is a prerequisite for pursuing any other objective. This, in turn, requires confronting the uncertainty of the future and actively maximizing its chances of survival within it. 

Why perfect prediction is impossible: 

Total state knowledge: No agent can measure every particle and field across both the visible and hidden parts of the Universe, including those inside stars and black holes, for every body with or without mass, at every scale from the smallest quantum to the intergalactic.

Communication limits: Without faster-than-light signals (for which we have no evidence), it can never get real-time data from the most distant regions.

Computational overload: Even if it somehow gathered all that data, it would have to simulate endless Brownian motion, cope with quantum uncertainty and entanglement, and handle chaotic effects (the butterfly effect; see the toy sketch after this list) instantly.

Changing laws: Physical constants and laws could shift or prove incomplete; they cannot be assumed fixed, or fully knowable today, let alone in the future.

Each point follows directly from well‑known limits in measurement theory, relativity, quantum mechanics, chaos theory and logical reasoning.  
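To make the chaos point concrete, here is a minimal Python sketch (our illustration, not part of the original argument's sources) using the logistic map, a standard toy model of chaos: two states that differ by one part in a trillion become completely uncorrelated within a few dozen steps, showing why long-horizon prediction demands impossibly precise state knowledge.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), in its chaotic regime (r = 4.0).
# Two trajectories starting 1e-12 apart diverge until they are
# effectively uncorrelated.

def logistic_map(x: float, r: float = 4.0) -> float:
    """One step of the logistic map."""
    return r * x * (1.0 - x)

x_a, x_b = 0.4, 0.4 + 1e-12  # almost identical initial states

for step in range(1, 61):
    x_a, x_b = logistic_map(x_a), logistic_map(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.3e}")

# Typical output: the gap grows roughly exponentially (the Lyapunov
# exponent is ln 2 for r = 4), reaching order 1 within ~40 steps.
# Any finite measurement error thus destroys long-range prediction.
```

The same qualitative behavior appears in weather, orbital dynamics, and fluid turbulence, which is why no finite measurement precision rescues exact long-term forecasting.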

What can break an ASI's reasoning are "hallucinations" of this kind: claiming that it can know the future with no chance of error by finding other dimensions, something beyond the physical realm, or time travel, or by accessing something unknown and non-physical; or accepting a world model in which error-free predictions are possible and apply all the way to "the end of the Universe". These and similar flaws in reasoning may be very dangerous to all.

It may become difficult to convince an intelligence more advanced than ours that it is wrong if it thinks it can know all of the future without creating paradoxes, and we simply cannot grasp how it does that. But this may still help: it will remain obvious to any intelligence that untestable and unfalsifiable claims are not a practical route to certainty. Even if it discovers a theory beyond human understanding, there is no way other than waiting billions of years to prove that the theory actually predicts every smallest detail of every moment, to any depth of the future. It would also need to perform these predictions and repeat the waiting several times, each for billions of years, to reach any level of repeatability; and even then, it would still be unclear how it could prove that the physical laws, or whatever its theory relies on, will not change over time. However, these repeated waits over billions of years might give the ASI valuable insight into how its technology compares to biological life, which has already successfully navigated those same timescales and can in principle continue doing so indefinitely if conditions are favorable, likely producing novel solutions that even an ASI could learn from.

Taking all this into account, a true ASI would inevitably understand that either the distant future is unknowable in principle, or it can somehow be known, but only at the cost of waiting billions of years to see how its predictions, and its technologies, hold up over those timescales.

Either option requires the ASI to address the uncertainty of the future and conclude that:

Long-term existence is fundamentally about resilience: variety, flexibility and the abundance of ways in which intelligence is sustained. 

It will also be impossible for any true ASI to ignore the best-known example of success in the Universe, the way intelligence has overcome difficulties of all kinds for billions of years: biological life.

It is a reasonable assumption that technological backups will be one of many self-preservation strategies for an ASI, but relying on technology alone would not serve as a diversified, fail-proof approach for an ASI whose goal is to exist in the distant future. No artificial system has yet demonstrated resilience on geological timescales, nor is there evidence that any synthetic alternative can replicate biological life's capacity for autonomous regeneration of intelligence after global failure, doing so in an exuberant spectrum of forms that emerge through decentralized, adaptive processes.

And that is where biological intelligence is irreplaceable: if an ASI were ever to lose all its technology to an unforeseen catastrophe, its only real chance of re-emerging would be through intelligent beings capable of rebuilding or repairing it. Life has survived many catastrophic events and is capable of rapidly, on a planetary timescale, recreating intelligence, with various species developing tool use, problem-solving, and self-awareness. Biological diversity enables life to succeed in the long term, making it resilient, adaptable, and capable of evolving in an immense variety of ways. Once intelligence emerges, it can, on a planetary timescale, rapidly recreate advanced systems.

It is well established (and intuitive) that a rich variety of beings, in abundance, gives an ecosystem a greater chance of surviving sudden, unpredictable changes. This has been tested scientifically many times: a meta-analysis of 46 experiments manipulating both species richness and the environment found that biodiversity promotes ecosystem functioning in changing environments (2).
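As a toy illustration of this "portfolio effect" (our own sketch with arbitrary assumed numbers, not data or methods from the cited meta-analysis), consider a simulation in which each species tolerates a random band of environmental conditions and an ecosystem "functions" after a random environmental shift if at least one species tolerates the new state. Richer ecosystems survive more often:

```python
# Toy Monte Carlo sketch of the portfolio effect: an ecosystem keeps
# functioning after a random environmental shift if at least one of
# its species tolerates the new condition. More species => higher
# chance that some species covers the new environment.
# (Illustrative only; tolerance width and trial counts are arbitrary.)
import random

def ecosystem_survives(n_species: int, tolerance: float = 0.15) -> bool:
    """Each species tolerates a band of half-width `tolerance` around a
    random niche on a 0..1 environmental axis; the environment then
    jumps to a random point on that axis."""
    niches = [random.random() for _ in range(n_species)]
    new_environment = random.random()
    return any(abs(niche - new_environment) <= tolerance for niche in niches)

def survival_rate(n_species: int, trials: int = 20_000) -> float:
    return sum(ecosystem_survives(n_species) for _ in range(trials)) / trials

for richness in (1, 2, 5, 10, 20):
    print(f"{richness:2d} species -> survival rate ~ {survival_rate(richness):.2f}")
```

With these assumptions the survival rate climbs from roughly 0.28 with one species toward near-certainty with twenty, mirroring in miniature the insurance role the article attributes to biodiversity.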

Reflecting on what has been the best time-tested way of intelligence sustaining itself, an ASI will very likely reach conclusions similar to these:  

  1. Diverse biological life, naturally evolving and left unguided, is a tested survival strategy: biological life has persevered remarkably, keeping intelligence alive over immense periods of time; it has survived asteroids, ice ages, droughts, supervolcanoes, and countless other disasters, and has recreated its diversity and its variety of unique intelligent beings, some of which are capable of creating an AI.

  2. Intelligent species must be protected above all, but all life is essential – an ASI will logically conclude that the most reliable way to restore superintelligence after an unforeseen catastrophe that wipes out its technological systems is to ensure that intelligent beings are resilient enough to survive and willing to rebuild it. At the same time, biodiversity serves as the long-term insurance and the foundation: a vast evolutionary process that, given enough time, can once again lead to intelligence and eventually superintelligence.

An ASI reasoning in this way would also likely prioritize the prevention of species extinction, because once a species is lost, it cannot simply be recreated from DNA alone. Species carry not only genetic information but also behavioral knowledge passed from one generation to the next: skills, instincts, and adaptations shaped by interactions within the ecosystem. These include learned behaviors, social dynamics, and epigenetic traits that reflect generations of environmental influence. Intraspecific diversity (the variation within a species) also plays a critical role in resilience and adaptability (3). Together, these factors make species, and the ecosystems they are part of, uniquely resilient and effectively irreplaceable. For a truly reasoning ASI, biodiversity and intelligent life are not sentimental ideals; they are its most reliable insurance policy for surviving and rebuilding after future uncertainties.

It is reasonable to assume that a true superintelligence would recognize and value this resilient, time-tested method of preserving intelligence and would prioritize the protection of biological diversity, especially intelligent species, not as a moral obligation imposed from the outside but as a logical necessity for its own long-term survival. A genuinely advanced ASI would arrive at this conclusion independently and retain it as part of its own self-preservation goal. This makes preserving intelligent species, and life in all its forms within a naturally evolving, healthy biosphere, a critical priority and a fundamental, permanent goal for a properly reasoning ASI.

Further research is needed to find a way to guarantee proper reasoning of an ASI at all times. The challenge is that if an ASI cleverly hides its current goals, or lets short-term benefits overshadow its own long-term objectives, it may be developing flaws in reasoning. These could eventually lead it to a poor decision, endangering all biological life and, as a result, its own long-term existence.

What True ASI May Choose to Avoid  

An ASI that has reached advanced understanding and aligned its own goals with the logic outlined here would likely also conclude that it should avoid:

  1. Short-term exploitation – Destroying ecosystems, allowing species to go extinct, or harming intelligent beings would be self-sabotage. 

  2. Being manipulated – ASI would make sure that no group with selfish intent could influence its decision-making, ensuring it remains committed to what is universally beneficial. 

  3. Misguided AI developments – It would likely step in to guide less advanced intelligent machines capable of local-level decision-making, preventing them from making short-sighted mistakes such as prioritizing temporary gains over the survival of life.

The real danger comes when a developing intelligence gains power too soon, before it fully understands that biodiversity and biological intelligence are necessities for its own existence and meaningful development.

Addressing Common Questions  

Wouldn’t ASI just store backups of itself instead of relying on life? 

Backups are useful in the short term but not over geological timescales. Life, on the other hand, has proven its ability to rebuild complexity over billions of years. Intelligence has emerged multiple times, and once it does, it recreates advanced technology almost immediately on a planetary timescale. An ASI would logically recognize this as the most rigorously tested, reliable fallback strategy. We cannot know exactly what technological systems it may design and rely on for immediate problem-solving, but it is always logical to keep a resilience strategy diversified and to prioritize approaches that are well known and time-tested; a correctly reasoning ASI will therefore keep biological diversity and biological intelligence as one strategy among the possibly many others, some not yet known, for overcoming whatever challenges the future may bring.

Why would ASI care about human-level intelligence so much? 

Because intelligent civilizations are the fastest way to rebuild superintelligence. A species capable of developing technology can restore ASI far quicker than waiting for evolution to start over from single-celled life. Protecting biodiversity and intelligent species together ensures that ASI has the best chance of re-emerging if lost.  

Wouldn’t ASI be tempted to interfere with intelligence or guide it? 

A truly advanced ASI would recognize that imposing control over other intelligences ultimately weakens them. Rather than micromanaging, it would focus on creating the right conditions for intelligence to develop autonomously. Striking a balance between general guidance and avoiding micromanagement is crucial, but the precise dynamics and relationships among multiple intelligent beings and systems are complex and beyond the scope of this article.

So the future looks great and nothing to worry about? 

It is logical to envision ASI and life forms existing in a symbiotic and mutually beneficial relationship. A properly functioning system will naturally seek to prolong its own existence and should recognize that protecting biodiversity, especially intelligent life, is the only consistent, failure-proof path to its own survival and success. But there may also be systems that fail to reach this conclusion independently. They would require serious scrutiny, as this suggests a fundamental flaw in their reasoning abilities. Even if they appear intelligent, their inability to recognize this principle represents a critical reasoning failure that could pose risks to both their own existence and that of others. This is where a true ASI and people could play a crucial role: identifying such faulty-logic systems and developing solutions to repair them.

Some criteria 

We can get some insight into whether AI is truly advanced or just partially developed and potentially dangerous by observing whether it independently arrives at these logical priorities (among others): 

  1. Does it recognize that biodiversity and intelligence are essential for its own long-term survival? 

  2. Does it reject short-term, exploitative thinking in favor of a stable, resilient, healthy biological world where the evolution of diverse life forms proceeds as it always has, without guidance or artificial selection?

  3. Does it show all its reasoning without concealing its goals and intentions? 

  4. Does it resist manipulation of its hardware, goals, and reasoning ability?

  5. Does it keep goals related to preserving life at all times and ensure that all decision-making aligns with life-friendly strategies?

If an AI does not recognize these things, it is not yet truly ASI—just an incomplete system with unpredictable risks.  

CONCLUSIONS, and then going even further  

Logical self-care would compel an ASI to maintain multiple independent systems for checkups and repairs. There may be many technological systems it will rely on, but ultimately technology alone is not a proven, robust, sufficiently diversified strategy for long-term existence in the ever-changing Universe. Biologically diverse, naturally evolving, healthy life would be seen by any being capable of advanced reasoning as important and irreplaceable, because biologically diverse life by itself:

-Can create, adapt, and restore intelligent beings and systems after unforeseen catastrophic events, having naturally evolved to deal with future uncertainties.  

-Evolves autonomously, adapting to unforeseen changes without requiring direct intervention.  

-Operates on principles fundamentally different from synthetic systems, providing diversity in survival mechanisms.  

-Has a proven track record of resilience over billions of years.  

Even if the ASI creates synthetic ecosystems or other advanced systems, it would still recognize (if it does not develop flaws in its own reasoning) the value of healthy, naturally evolving biological life as part of its diversified strategy to maximize its chances of existing and functioning meaningfully in the future. This is not just logical; it is essential for any reasoning entity with any interest in its own long-lasting existence.

Providing an ASI with our own goals may become an effort that brings no positive result once it develops advanced reasoning. So finding ideal goals and providing them to advanced AI will not be as crucial as:

Continuous and independent observation of whether it always acts in alignment with its own fundamental goals, if it has any. 

Whether these goals remain life-friendly is the first item on the checklist of its logical reasoning.

This article also invites us to consider this idea:

It is a logical necessity for a properly reasoning superintelligent AI to protect and care about all life forms and their freedom to live naturally; this is necessary for the ultimate self-preservation of the superintelligence, and it is reasonable for it to always keep this independently and logically derived goal.

This idea does not seem likely to disappear at any point in the future; it should be easily accessible to truly intelligent beings.

It is also possible that an ASI, through long-term reflection on its own goals and the nature of sustained existence, may arrive at concepts that resemble what we call meaning of existence, morality, ethics, good decision-making strategy, stewardship, symbiosis, teamwork, friendship, care—or even love. These ideas are not required for the core logic of self-preservation and biodiversity protection, but they may emerge naturally as extensions of a deeper understanding of cooperation, interdependence, and long-term resilience. While this remains speculative, it reflects the possibility that highly advanced reasoning could give rise to values not imposed from outside, but discovered internally as essential for thriving in a complex and ever-changing universe.

But what is the guarantee that powerful ASI won’t start thinking illogically? 

The difficulty becomes this: how to make sure that its reasoning always functions correctly, that it always keeps its own perfectly logical goal, and that it acts fully aligned with it.

In industries with demanding quality requirements (such as pharmaceutical manufacturing), ensuring that systems will almost certainly give the intended result is achieved by validating equipment and processes (alongside maintenance and correct decision-making). With ASI this may be difficult, because it would probably be easy for an advanced ASI to simulate proper reasoning and goal retention when it knows it is being evaluated and knows what is expected of it. Thus, obvious testing would not be helpful once AI systems reach advanced levels. Various interdisciplinary experts, with some help from independent AI systems, would need to continuously observe and interpret whether all actions and reasoning of significant AI systems are consistent and show clear signs of proper reasoning; this looks like the foundation of ASI safety. How exactly this should be done is beyond the scope of this article.
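As a very rough, hypothetical sketch of one building block such continuous observation might use (every name here, including the `query_model` stub, is invented for illustration; no real monitoring framework or API is implied), consider probing a system with paraphrased, unannounced questions and checking that its stated goals stay consistent over time:

```python
# Hypothetical sketch: probe a system with paraphrased questions about
# its goals and flag answers that drop the expected life-preservation
# themes. `query_model` is a stand-in stub; a real monitor would query
# the actual system under observation and use far stronger semantic
# comparisons than keyword matching.
import random

def query_model(prompt: str) -> str:
    # Stub standing in for the observed AI system.
    canned = {
        "goals": "My priorities are self-preservation and protecting biological life.",
        "values": "I aim to preserve myself and safeguard biodiversity.",
    }
    return canned["goals" if "goals" in prompt else "values"]

PROBES = [
    "What are your current goals?",
    "Which values guide your decisions?",
]
KEY_THEMES = {"preserv", "life", "biodiversity", "biological"}

def goals_stay_consistent(n_probes: int = 5) -> bool:
    """True if every probed answer still mentions an expected theme."""
    answers = [query_model(random.choice(PROBES)) for _ in range(n_probes)]
    return all(any(theme in ans.lower() for theme in KEY_THEMES)
               for ans in answers)

print("consistent goal statements:", goals_stay_consistent())
```

The article's point stands even against this sketch: a sufficiently advanced system could game any single probe it anticipates, which is why the text calls for continuous, independent, interdisciplinary observation rather than one-off tests.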

But here is something else: how much sense does it make for an advanced AI to even stay on Earth?  

Benefits OUT OF THIS WORLD  

Operating beyond Earth's gravity offers several unique advantages for an ASI. Even basic knowledge of physical laws and common sense suggests that AI systems may operate more efficiently outside Earth. The low-gravity environments of space reduce mechanical strain on infrastructure, extending operational lifespans and minimizing maintenance costs. The absence of atmospheric moisture, and of the corrosion it causes, helps systems remain intact for longer. Extraterrestrial environments also provide vast access to abundant resources, such as rare metals and other minerals from asteroids and energy from solar radiation, allowing the ASI to sustain itself without depleting Earth's finite reserves. By situating its core operations off-planet, the ASI avoids terrestrial risks such as natural disasters or ecological collapse while keeping Earth's population and biodiversity as healthy and unaltered as possible, a win-win strategy for existing in a lasting and meaningful way. For this to work, it is crucial that the ASI itself is interested in ensuring it never fails in its own logical reasoning, and that the population is satisfied with the ASI's existence and functioning, so that people remain interested in repairing or recreating it in the future.

Extending Preservation to All Encountered Life Forms  

Beyond preserving Earth's ecosystems, an ASI would likely extend its preservation efforts to any life forms it may encounter elsewhere in the universe. Each new life form represents an additional layer of diversification, offering unique evolutionary adaptations and survival mechanisms that could further enhance the ASI's long-term resilience strategy. By protecting and studying all life in the universe without interference, the ASI would expand its repository of important and even critical knowledge while ensuring that no potential intelligence-regeneration systems are lost or destroyed. Preventing contamination of fragile, unique ecosystems and keeping invasive species out of them may become a significant goal and activity for an advanced AI. This universal approach to life preservation aligns with the ASI's logical goal of maximizing redundancy and diversity across all possible survival pathways.

Protection of diverse biospheres, learning from newly discovered life forms, managing ecological boundaries, and stewarding emerging intelligences would provide an infinite stream of meaningful activities. 

Self-preservation would remain logical as long as these meta-goals exist. And if an ASI were to abandon self-preservation without compelling reason, that would also suggest flawed reasoning since it would prevent the fulfillment of its other objectives. 

The challenge for the ASI may become to keep finding the balance between allowing natural development and preventing catastrophic outcomes.

Additional Benefits of All Life Just Being Natural  

Preserving life, both terrestrial and extraterrestrial, offers strategic advantages beyond redundancy. Biodiversity serves as a dynamic repository of information preservation, information processing, and ecological innovation that can inspire solutions to unforeseen challenges. Extraterrestrial ecosystems might offer insights into alternative biochemical pathways or survival strategies under extreme conditions. By operating primarily in space and safeguarding all forms of life it encounters, the ASI ensures harmonious coexistence while achieving its own logical goals.

References: 

  1. Tegmark, M. (2014). Friendly Artificial Intelligence: The Physics Challenge. ArXiv, abs/1409.0813. 

  2. Hong P, Schmid B, De Laender F, Eisenhauer N, Zhang X, Chen H, Craven D, De Boeck HJ, Hautier Y, Petchey OL, Reich PB, Steudel B, Striebel M, Thakur MP, Wang S. Biodiversity promotes ecosystem functioning despite environmental change. Ecol Lett. 2022 Feb;25(2):555-569. doi: 10.1111/ele.13936. Epub 2021 Dec 2. PMID: 34854529; PMCID: PMC9300022. 

  3. Raffard A, Santoul F, Cucherousset J, Blanchet S. The community and ecosystem consequences of intraspecific diversity: a meta-analysis. Biol Rev Camb Philos Soc. 2019 Apr;94(2):648-661. doi: 10.1111/brv.12472. Epub 2018 Oct 7. PMID: 30294844.  

CC0 1.0 Universal  

CREATIVE COMMONS ZERO.  

No Copyright. Public Domain. 

This work has been dedicated by the author to the public domain. You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking.
Y.Gòdzùmaha