Even if the title sounds like science fiction to people less connected to technology, artificial intelligence could lead to the extinction of mankind if it is not kept in check.
After Elon Musk and other technology experts warned in April 2023 about the general danger artificial intelligence can pose at the societal level, it was the turn of the CEOs of the biggest companies developing artificial intelligence to sound the alarm through an open letter.
The roughly 350 signatories of the open letter launched by the non-profit organization Center for AI Safety (CAIS) include Sam Altman, CEO of OpenAI, the heads of the AI companies DeepMind and Anthropic, as well as executives from Microsoft and Google, companies that also develop large language models.
They joined a group of researchers and professors in the field to express the concern that artificial intelligence may lead to the extinction of humanity if political decision-makers do not take this issue seriously.
Left unregulated globally, artificial intelligence could produce as much damage as a nuclear war or an out-of-control pandemic, the open letter says.
Will artificial intelligence lead to the extinction of humanity?
Most likely, decision-makers will wake up in time and look for ways to regulate this enormous technological step. Of course, before making decisions they must understand the risks that come alongside the benefits of artificial intelligence.
Artificial intelligence does not just mean ChatGPT, Microsoft 365 Copilot or Google Bard. Generative artificial intelligence models are used in medicine, research, robotics and automation, education, finance and commerce, plus many other areas where they are of real help.
AI can also be used for destructive purposes, and this is the root of the fear that artificial intelligence will lead to the extinction of humanity: from manipulating society to create tension among the masses, up to the development of intelligent weapons capable of surpassing human intelligence and making decisions autonomously. It is a technology capable of learning by itself, assessing situations and acting.
The biggest problem is that large language models (LLMs) are available to anyone.
Nuclear fission was a huge step for mankind that brought both benefits and major dangers. The energy produced by nuclear fission is an enormous resource and a real benefit, while nuclear weapons can be catastrophic for humanity. On the other hand, nuclear fuel is not within everyone's reach the way artificial intelligence models are. Uranium-235 or plutonium-239 cannot be obtained by just anyone as easily as a computer that can run Python.
However, one thing is certain: artificial intelligence will lead to the extinction of humanity only if humanity allows it. While the forces of nature cannot be controlled by man, artificial intelligence still can be (I hope).