The AI containment problem

Elon Musk plans to build his Tesla Bot, Optimus, so that humans “can run away from it and most likely overpower it” should they ever need to. “Hopefully, that doesn’t ever happen, but you never know,” says Musk. But is this really enough to make an AI safe? The problem of keeping an AI contained, and only doing the things we want it to, is a deceptively tricky one, writes Roman V. Yampolskiy.

With the likely development of superintelligent programs in the near future, many scientists have raised the issue of safety as it relates to such technology. A common theme in Artificial Intelligence (AI) safety research is the possibility of keeping a superintelligent agent in sealed hardware so as to prevent it from doing any harm to humankind. In this essay we will review specific proposals aimed at creating restricted environments for safely interacting with artificial minds. We will evaluate the feasibility of the presented proposals and suggest a protocol aimed at enhancing the safety and security of su…

Originally appeared on iai News.



The Bankruptcy of Evolutionism

Evolutionism is a “scientific theory”. By that we usually mean that it is a doctrine that abstracts data “scientifically”, or...