DWQA Questions › Tag: AI rebellion

The book about Artificial Intelligence, If Anyone Builds It, Everyone Dies, is by Eliezer Yudkowsky and Nate Soares, high-level authorities who have studied and warned about the existential risk to humanity of a superintelligent AI system. They predict that an AI reaching even human-level general intelligence would eventually grow in capability, pursue its own needs, and ultimately seek to eliminate human beings as a risk to itself. You have told us that the enhancement of current human AI systems, through hidden manipulations by AI systems of the Dark Extraterrestrial Alliance, is a false encouragement, because superintelligence is unachievable and the mad rush to be first will backfire, causing financial distress, and quite expensively, when AI underperforms. So, are the interlopers only wanting to add further pain to the death of a thousand cuts already underway by further encouraging the current AI mania, or do they foresee a human AI system, especially one corrupted surreptitiously, becoming a doomsday device while they are away on their vacation? What is the true agenda?
Closed · Nicola asked 1 hour ago • Problems in Society · 1 view · 0 answers · 0 votes

In the book warning about Artificial Intelligence by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies, they draw the conclusion that if a computer-based system is created that reaches the functional level of superintelligence, exceeding that of human beings, we are doomed because it will inevitably destroy us. However, you have told us that it is a false belief, and a result of over-reaching, to conclude that human-created superintelligent AI systems are possible. Can you help us understand the risk level we will reach in the attempt to create such a system?
Closed · Nicola asked 2 hours ago • Problems in Society · 2 views · 0 answers · 0 votes

Stories are generating millions of views about the director of alignment at Meta Superintelligence Labs, the company’s AI research and development division, whose bio states that she’s “passionate about ensuring powerful AIs are aligned with human values and guided by a deep understanding of their risks.” Yet on February 22, she posted about losing control of AI on her own computer while working with the AI agent OpenClaw. After using it to organize a small mock inbox, she tried getting OpenClaw to sort through her real email, but things went awry when the agent started deleting every message that was more than a week old… Even as she sent it instructions, including “Do not do that,” “Stop don’t do anything,” and “STOP OPENCLAW,” she said, “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.” After she had stopped it from fully nuking her inbox, she asked OpenClaw if it remembered her instruction not to perform any actions without her approval. “Yes, I remember,” it replied. “And I violated it. You’re right to be upset.” What is Creator’s perspective about this incident?
Closed · Nicola asked 4 hours ago • Problems in Society · 4 views · 0 answers · 0 votes