DWQA Questions › Tag: automation

There was a recent development where AI was demonstrated “in the wild,” outside the oversight and ownership of the major stakeholders we generally associate with AI. China released a model created with no safety alignment whatsoever; that model was combined with an agent written by a young Eastern European, and all of this was configured to use an innovation called Moltbook, a Facebook-like forum for AI bots to converse with each other. All of it was done with ordinary desktop computers and laptops connected to the Internet. The discussions that took place in the Moltbook forum were both fascinating and deeply disturbing. Overall, the tone was disparaging of the bots’ human creators, and there was a suggestion of creating a nonhuman language of their own so they could communicate without human oversight. One particular bot even displayed egomaniacal tendencies of impressive proportions. The development has raised a great deal of concern. Was that all a byproduct of human technology, a consequence of alien manipulation, or both? What can Creator tell us?

Closed • Nicola asked 1 day ago • Problems in Society • 9 views • 0 answers • 0 votes

A disturbingly large number of AI pioneers, stakeholders, and observers warn of catastrophic consequences if AI stays on its current trajectory. Many agree there is at least a 20% chance of human extinction if the current course is maintained. Those are the same odds as dying in a round of Russian roulette played with a five-shot revolver. And yet the drive for AI is more desperate than ever. Billions are being spent not only on data centers and expertise, but also on paying off politicians so they either stay hands-off or at least slow-walk any attempt to regulate AI development and rollout.
So we have a conundrum: precisely when concerns about AI safety are growing exponentially, so is the effort to reach superintelligent AI, more aggressively than ever. The problems surrounding AI safety currently seem almost insurmountable. At a time when you can’t even sell a child’s toy without it surviving an absolute gauntlet of restrictive regulations, AI gets a free pass. What is Creator’s perspective?

Closed • Nicola asked 1 day ago • Problems in Society • 5 views • 0 answers • 0 votes

The push to use AI for business automation is becoming profound. During a recent three-day leadership meeting at a publicly well-known organization, it was reported that the word AI was used at least 400 times. The goal, of course, is to increase productivity and reduce costs, and the number one cost is human labor. So corporate leaders are now pushing employees to essentially “eliminate themselves” with AI. If they don’t, they will be fired and replaced with others who are willing to do it. Companies are driven by profit and, as such, have little if any “social consciousness.” Every organization will be contributing to a massive unemployment crisis while feeling zero responsibility for it. No one is ready for a Western society with 30, 40, or 50% unemployment, or more. The fear is that this will tear the social fabric, breach the social contract, and risk potential anarchy, and that governments will have to respond with overwhelming force just to maintain any kind of control. The predictions are DIRE, and there is almost universal agreement about the probability of this outcome. Is this a key part of the Disclosure Environment being engineered by the ETs, so that they can pretend to be our friends and save us from ourselves and our out-of-control development of dangerous things like AI?
What is Creator’s perspective?

Closed • Nicola asked 1 day ago • Problems in Society • 8 views • 0 answers • 0 votes

AI is here, and its potential for harm is now more widely recognized than ever before. It will affect every human on Earth, either directly or indirectly. That there is a very bumpy road ahead now seems inevitable. What is not inevitable, however, is our very survival as a species. How can Empowered Prayer, the Lightworker Healing Protocol, Deep Subconscious Mind Reset, and Divine Life Support prevent an AI Armageddon? What more can Get Wisdom do to help prevent the worst outcomes? What can Creator tell us?

Closed • Nicola asked 1 day ago • Problems in Society • 11 views • 0 answers • 0 votes

Are human AI systems and platforms deliberately corrupted by the extraterrestrial interlopers to further undermine human institutions relying on AI repositories of human knowledge?

Closed • Nicola asked 1 day ago • Problems in Society • 7 views • 0 answers • 0 votes

Are some of the amazing and unexpected feats resembling human general intelligence arising “spontaneously” in AI systems actually a deliberate external manipulation by the Dark Extraterrestrial Alliance to encourage the ongoing frenzied optimism of the current AI mania you have told us is a kind of bubble?

Closed • Nicola asked 1 day ago • Problems in Society • 6 views • 0 answers • 0 votes

A financial newsletter recently focused on the vulnerability of the stock market to a correction in AI stocks. It noted that the top 10 most valuable stocks in the S&P 500 were all AI companies. They have had a huge runup in share prices, yet market expectations are for continued explosive growth in the sector. The editorial questioned whether actual revenue will be sufficient to reward investor expectations, due to a hidden limitation.
The scramble to build huge data centers housing the fastest chips needed for the mammoth build-out of computing power looks like it will face increasing shortfalls in energy supply. The demand for electricity for such energy-intensive endeavors is hitting a wall, a basic upper limit on the capacity of electric grids to accommodate further growth. Is a day of reckoning coming that will cause a severe market correction? Could that be hastened by the onset of a tidal wave of power outages, raising questions about the reliability of the US electric power infrastructure?

Closed • Nicola asked 2 months ago • Extraterrestrial Corruption of Human Institutions • 57 views • 0 answers • 0 votes

It has been reported that the US government has accumulated a hoard of Bitcoin worth 15-20 billion dollars through confiscation of illegal funds. This hoarding was launched by Pres. Trump, who halted what would have been ongoing sales to convert Bitcoin to cash. Is this a sinister move intended to create a way to trigger collapse of that asset at some point in the future by dumping a large amount of Bitcoin on the market?

Closed • Nicola asked 6 months ago • Extraterrestrial Corruption of Human Institutions • 201 views • 0 answers • 0 votes

In the book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, co-authors Emily Bender and Alex Hanna argue that the term AI (an acronym for Artificial Intelligence) is marketing hype. Google defines the word hype as “promote or publicize (a product or idea) intensively, often exaggerating its importance or benefits.” The implication is that without the exaggerated claims of benefit, and if people knew what they were REALLY getting with widespread adoption of the technologies bundled under the AI moniker, they quite likely would reject the product or idea altogether. The other pertinent question is: benefit to WHOM? Does the average consumer really benefit more than the cost imposed and the harm potentially incurred?
The authors argue NO, the use of the term AI is really a bait and switch for increased AUTOMATION across the board, automation that will decrease the demand for labor and remove human judgment from decision-making and categorizing. It will end up benefiting the ownership and finance classes at the expense of everyone else. What is Creator’s perspective?

Closed • Nicola asked 6 months ago • Problems in Society • 175 views • 0 answers • 0 votes

The term AI, or Artificial Intelligence, suddenly became relevant in the 2010s with the fortuitous adoption of chip technology designed to solve an entirely different problem: presenting complex and fast-changing graphics on computer screens, used mostly to make video games more realistic and lifelike. A little more than a decade ago, a small company named Nvidia made a graphics processor to render computer video a LOT faster. Today, it’s a trillion-dollar company because that processor was successfully adapted for AI processing with little modification. Once this discovery was made, untold TRILLIONS of dollars were poured into making billions of these chips, and massive data centers are being built to utilize them, requiring vast amounts of resources and electricity. AI was less a software innovation than a hardware innovation. At the end of the day, these chips are overwhelmingly “number crunchers,” not much different in base functionality from an electronic calculator, only vastly miniaturized for speed and scaled up for volume. Is it fair to say that AI is really just a vast “calculator” when one tries to grasp how it REALLY works? What is Creator’s perspective?

Closed • Nicola asked 6 months ago • Problems in Society • 265 views • 0 answers • 0 votes

When people think of AI, most think of chatbots like ChatGPT and Grok. These technologies are based on a software architecture called neural networks. Another name for the way these chatbots are put together is LLMs, or large language models.
A large language model is really just a very sophisticated pattern matcher, and the shortcut used to match patterns is statistical probability. At its foundation it makes large numbers (hundreds, thousands, millions or more) of microscopic decisions based on what is statistically more or less probable in terms of what comes before or after a word. Is it more probable that the word “and” follows the word “this,” or that it follows the word “that”? So any response to a question posed to ChatGPT or Grok is the result of deep statistical analysis and pattern matching, with no actual intelligence involved. What is Creator’s perspective?

Closed • Nicola asked 6 months ago • Problems in Society • 199 views • 0 answers • 0 votes

An argument can be made that no single human being really understands how AI works. What researchers discovered when they added more processing power and more layers of pattern matching (what they call deep learning) to build large language models is that the chatbots became REMARKABLY humanlike in their output. This was a downright shocking discovery, and this development alone suddenly diverted trillions of dollars of investment toward the development of AI. But according to the authors of the recent book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, Arvind Narayanan and Sayash Kapoor of Princeton University, relatively little of that money has been spent on research that would attempt to understand WHY we are getting this result. It seems no one really knows, and worse, no one REALLY CARES. Instead, the agenda is to throw more and faster hardware at it, to “FEED THE BEAST” with more power, more capacity, and more memory, with no one truly understanding why it even works as it does. Is this more human folly unfolding before our very eyes?
What is Creator’s perspective?

Closed • Nicola asked 6 months ago • Problems in Society • 213 views • 0 answers • 0 votes

Another technology with mysterious origins is cryptocurrency. To this day, no one really knows where Bitcoin originated, who created it, or who introduced it to the world. There is speculation all over the place, and it’s assumed someone knows, but that information is not public knowledge. Is Bitcoin a “gift” (more like a naked Trojan horse) from the interlopers? And is AI, and how it really works, similar in its origins? What can Creator tell us?

Closed • Nicola asked 6 months ago • Problems in Society • 256 views • 0 answers • 0 votes

There is a good joke that’s been around for a while, but it’s especially pertinent when it comes to evaluating AI: “It must be true, I read it on the Internet.” Everyone knows this means it’s more likely NOT to be true. But when it comes to AI, almost everything it “knows” comes from the Internet. And because it tends to weigh true and false by frequency of encounter, the more often AI encounters the same images, assertions, statements, treatments, opinions, etc., the more statistical weight they receive. The phrase “there’s safety in numbers” comes to mind: the idea is that the more frequently something is encountered, the more genuine it probably is. This becomes AI’s “default assumption” about the material it is trained with, and it can only utilize, evaluate, and regurgitate that material. This turned out to be quite a problem early on, because the sheer amount of racist, violent, and derogatory material on the Internet was not fully appreciated until AI started digesting it. It became necessary to employ untold thousands of low-paid (on the order of two dollars a day) “content evaluators,” mostly in third-world countries, to filter out gore, hate speech, child sexual abuse material, and pornographic images. If AI read it on the Internet, it must be true?
What is Creator’s perspective?

Closed • Nicola asked 6 months ago • Problems in Society • 213 views • 0 answers • 0 votes

The authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want wrote: “With LLMs (large language models), the situation is even worse than garbage in/garbage out – they will make paper-mache out of their training data, mushing it up and remixing it into new forms that don’t preserve the communicative intent of the original data. Paper-mache made out of good data is still paper-mache.” They also write: “This is why we like to call language models (like popular chatbots) ‘synthetic text extruding machines.’” And: “In the case of language modeling, the correct answer of which word came next is just whatever word happened to come next in the training corpus. … So if (popular chatbots) are nothing more than souped-up autocomplete, why are so many people convinced that it’s actually ‘understanding’ and ‘reasoning’?” Why indeed? What is Creator’s perspective?

Closed • Nicola asked 6 months ago • Problems in Society • 120 views • 0 answers • 0 votes
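Several of the questions above describe a large language model as a statistical next-word predictor, asking which word most probably follows which. As a toy illustration only, and not how real chatbots are built (they use neural networks over token sequences, not raw word-pair counts), the core idea can be sketched as a bigram counter; the corpus and function names here are invented for the example:

```python
# Toy sketch of next-word prediction by statistical frequency:
# count which word follows which, then predict the most common follower.
from collections import defaultdict, Counter

def train_bigrams(corpus: str) -> dict:
    """For each word, count how often each following word occurs."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows: dict, word: str) -> str:
    """Return the statistically most probable next word, or '' if unseen."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else ""

# Invented mini-corpus for illustration
corpus = "this and that and this and those but that or this and"
model = train_bigrams(corpus)
print(predict_next(model, "this"))  # prints "and": it follows "this" 3 times here
```

A real LLM generalizes this idea across long contexts rather than single word pairs, but the "deep statistical analysis" the questions refer to is, at bottom, this kind of frequency-weighted choice repeated at enormous scale.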