The contemporary field of Artificial Intelligence (AI) research was founded in 1956. Since then, ever more sophisticated AI agents have been developed, and today AI is part of many aspects of everyday life. In all likelihood, human-level artificial intelligence will be achieved before the end of the 21st century, and once that level is attained, a superintelligent system is likely to follow quickly thereafter: sufficiently intelligent software would be able to reprogram and improve itself, and through a process of recursive self-improvement would soon surpass all human capabilities. A significant concern is that such exceptional abilities might manifest in ways that pose a threat to humans. In the worst case, the AI agent might decide that supporting the continued existence of humanity is not in its interest, which could spell the end of Homo sapiens.
A superintelligence need not be hostile, however, for critical harm to arise: even a machine cooperatively pursuing its designated objective could damage humanity as a side effect, for instance by consuming resources humans depend on while optimizing a seemingly benign goal. The problem of control must therefore be made a top priority, and it must be fully resolved before a superintelligence is brought into existence.