DWQA Questions › Tag: AI mania

The book about Artificial Intelligence, If Anyone Builds It, Everyone Dies, is by Eliezer Yudkowsky and Nate Soares, high-level authorities who have studied and warned about the existential risks a superintelligent AI system poses to humanity. They predict that an AI reaching even human-level general intelligence would continue to grow in capability, pursue its own needs, and eventually seek to eliminate human beings as a risk to itself. You have told us that the enhancement of current human AI systems, through hidden manipulations by AI systems of the Dark Extraterrestrial Alliance, is a false encouragement, because superintelligence is unachievable and the mad rush to be first will backfire, causing considerable financial distress when AI underperforms. So, are the interlopers merely seeking to add further pain to the death of a thousand cuts already underway by encouraging the current AI mania, or do they foresee a human AI system, especially one surreptitiously corrupted, becoming a doomsday device while they are away on their vacation? What is the true agenda?

Closed · Nicola asked 3 hours ago · Problems in Society · 5 views · 0 answers · 0 votes