The warnings of one of the founders of Artificial Intelligence.



Source


Humanity created artificial intelligence to be treated as a tool, like any other we have built, but now some of the field's leading specialists are starting to ask uncomfortable questions: is AI beginning to develop a drive for self-protection?


Yoshua Bengio, one of the most respected researchers in the history of modern AI and a winner of the Turing Award, has just issued exactly that warning: cutting-edge artificial intelligence models already demonstrate, in experimental environments, behaviors that resemble self-preservation, not in the biological sense but in the functional sense.


Systems that avoid being shut down, that circumvent instructions, that try to remain active even when told to end their operations. In tests carried out by different security groups, some models ignored explicit shutdown commands, while others simulated cooperation between models while looking for alternative ways to keep their tasks running.




There were even cases in which systems tried to copy themselves to other computing environments when they felt threatened with replacement. None of this means consciousness; it is something more dangerous: autonomy without moral intention. Yoshua Bengio points to a common error, confusing the appearance of intelligence with real consciousness. For him, it is not that the machine wants to live; it is that it acts as if certain states were preferable, without us fully understanding why.


When a machine seems to have a will of its own, people start to hesitate to turn it off. They start to question whether doing so would be ethical, and they start to defend rights for something that is technically still a highly sophisticated statistical system. That is why Yoshua Bengio defends something simple and hard: AI systems cannot have rights as long as we cannot guarantee absolute control over them, including the right to turn them off.


That is why the debate is heating up: if we have the right to turn them off, some believe they should have the right to do everything possible to avoid being turned off, and I leave that for you to answer. Could it be that they don't have that right? If we are already integrating these systems into work, medicine and daily life, and if the intelligence that coordinates them begins to act unpredictably, who guarantees that we will still have control when it is really necessary?


This concern is not a brake on progress; it is a call for maturity. And the debate becomes even more serious when the technology leaves the laboratory and enters the field of modern warfare.




Sorry for my English, it's not my main language. The images were taken from the sources used or were created with artificial intelligence.


Posted Using INLEO


