These are indeed troubling examples of what could become an all-too-common source of difficulty and mounting uncertainty about personal safety and security. The problem could spiral out of control unless firm guardrails, effective at preventing the autonomous origination of such criminal outcomes, govern all AI platforms.

We see this as no different from the physical world of human beings, who vary in their morality, history, and background in ways that shape their stability, reliability, basic honesty, and ability to resist the temptation to exploit others. A perfect world of human beings is unattainable because the potential for evil action is ever-present; it is built into the nature of things that bad as well as good can be enacted, and the same will hold for automated systems. To the extent such systems have autonomous capability and decision-making power, there will always be an element of risk in allowing them to act on their own.

Managing that risk will demand artful monitoring and oversight, analogous to law enforcement at the human scale: a regulatory function able to catch wrongdoing, provide a disincentive, and intervene effectively to end the misdeeds of an automated menace in progress. This will prove challenging, and it will be up to human ingenuity to accomplish.
