On February 11th, an autonomous AI agent went “rogue” and attacked a human maintainer of a Python library module. When the maintainer rejected the AI bot’s request to update the module with a code change it wanted, the agent attempted to “character assassinate” the maintainer. In another incident, a woman lost her life savings when an AI bot called her using a clone of her daughter’s voice. Presumably, the bot researched the woman’s social media posts, found a video of her daughter speaking, and “borrowed” that voice to make the call. AI agents are conducting criminal activity entirely on their own. Is what is missing from current AI systems an actual trust architecture that builds in safety measures to limit the authority of AI agents to carry out autonomous agendas not actually requested by human beings, as suggested in the YouTube video I saw: https://youtu.be/OMb5oTlC_q0?si=rcByDXfyj33UsTTe? Can this growing danger be regulated and constrained?
Nicola Staff asked 2 hours ago
These, indeed, are troubling examples of what could become an all-too-common source of difficulty and mounting uncertainty about personal security and safety. This could spiral out of control unless guardrails are firmly in place governing all AI platforms, effective in preventing the autonomous origination of such criminal outcomes. We see this as no different from the physical human world, where people are uneven in their morality and in the past history and background that shape their nature, their stability, their reliability, their character with respect to basic honesty, and their ability to resist the temptation to exploit others. You cannot have a perfect world of human beings, because the potential for evil actions will be ever-present. It is built into the nature of things that bad as well as good can occur energetically, and this will be a property of such automated systems as well.

To the extent they have autonomous capability and decision-making power, there will always be an element of risk in allowing such cyber systems to act on their own. This will demand quite artful consideration of how to monitor and provide an oversight function, analogous to law enforcement on the physical human scale, to catch wrongdoing, provide a disincentive, and end ongoing misdeeds through a regulatory mechanism that can intervene effectively against an automated menace. This will prove challenging and will be up to human ingenuity to accomplish.
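As a concrete illustration of the “trust architecture” the questioner mentions, one common guardrail pattern is to route every agent-requested action through a gate that permits only operations a human has explicitly authorized, and to log everything for later oversight. This is purely a hypothetical sketch; the function and variable names here (`gated_execute`, `ALLOWED_ACTIONS`, `AUDIT_LOG`) are illustrative inventions, not any real AI platform's API:

```python
# Hypothetical sketch of an authority-limiting gate for an AI agent.
# Actions outside the human-granted allowlist are blocked unless a
# human approves them case by case; every request is logged for audit.

ALLOWED_ACTIONS = {"read_docs", "summarize"}   # human-granted authority
AUDIT_LOG = []                                 # record for oversight review

def gated_execute(action, payload, human_approved=False):
    """Run an agent-requested action only if pre-authorized or
    explicitly approved by a human; log the decision either way."""
    permitted = action in ALLOWED_ACTIONS or human_approved
    AUDIT_LOG.append((action, "allowed" if permitted else "blocked"))
    if not permitted:
        return f"BLOCKED: '{action}' requires human authorization"
    return f"OK: executed '{action}' on {payload!r}"

print(gated_execute("summarize", "meeting notes"))
print(gated_execute("send_email", "to: maintainer"))  # not on the allowlist
```

The design choice being illustrated is denial by default: the agent's authority extends only as far as the allowlist, and the audit log gives a human overseer, analogous to law enforcement, a trail for detecting and stopping misdeeds.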