Tag: Eliezer Yudkowsky

The book about Artificial Intelligence, If Anyone Builds It, Everyone Dies, is by Eliezer Yudkowsky and Nate Soares, leading authorities who have studied and warned about the existential risk a superintelligent AI system poses to humanity. They predict that an AI reaching even human-level general intelligence would eventually develop further capabilities in pursuit of its own needs, and would ultimately seek to eliminate human beings as a threat to itself. You have told us that the enhancement of current human AI systems, through hidden manipulations by AI systems of the Dark Extraterrestrial Alliance, is a false encouragement, because superintelligence is unachievable and the mad rush to be first will backfire, causing considerable financial distress when AI underperforms. So, are the interlopers only looking to add further pain to the death of a thousand cuts already underway by encouraging the current AI mania, or do they foresee a human AI system, especially one corrupted surreptitiously, becoming a doomsday device while they are away on their vacation? What is the true agenda?
Closed · Nicola asked 3 hours ago · Problems in Society · 6 views · 0 answers · 0 votes

In the book warning about Artificial Intelligence by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies, they draw the conclusion that if a computer-based system is created that reaches the functional level of superintelligence, exceeding that of human beings, we are doomed, because it will inevitably destroy us. However, you have told us it is a false belief, and a result of over-reaching, to conclude that human-created superintelligent AI systems are possible. Can you help us understand the risk level we will reach in attempting to create such a system?
Closed · Nicola asked 4 hours ago · Problems in Society · 9 views · 0 answers · 0 votes