The book about Artificial Intelligence, If Anyone Builds It, Everyone Dies, is by Eliezer Yudkowsky and Nate Soares, who as high-level authorities have studied and warned about the existential risk a superintelligent AI system would pose to humanity. They predict that an AI reaching even human-level general intelligence would eventually grow in capability, pursue its own needs, and ultimately seek to eliminate human beings as a risk to itself. You have told us the enhancement of current human AI systems, through hidden manipulations by AI systems of the Dark Extraterrestrial Alliance, is a false encouragement, because superintelligence is unachievable and the mad rush to be first will backfire, causing financial distress, and quite expensively, when AI underperforms. So, are the interlopers only wanting to add further pain to the death of a thousand cuts already underway by encouraging the current AI mania, or do they foresee a human AI system, especially one corrupted surreptitiously, becoming a doomsday device while they are away on their vacation? What is the true agenda?
Nicola Staff asked 3 hours ago
They are not counting on a human-level system, even with an assist from the extraterrestrial AI systems in adding advanced capabilities for certain tasks as a false encouragement, to be a doomsday device that will take care of eliminating human life on the Earth so they will not have to get their hands dirty carrying out an active annihilation. They know that whatever takes place, having started this bandwagon rolling, will cost humanity dearly across the board in wasted time and effort, and that is what they are counting on: something that may well end up being abandoned as a promising idea that proves, in the end, infeasible for use on such a wide scale.