DWQA Questions › Tag: Alex Hanna

In the book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, co-authors Emily Bender and Alex Hanna argue that the term AI (an acronym for Artificial Intelligence) is marketing hype. Google defines the word hype as “promote or publicize (a product or idea) intensively, often exaggerating its importance or benefits.” The implication is that without the exaggerated claim of benefit, and if people knew what they were REALLY getting with widespread adoption of these technologies bundled under the AI moniker, they quite likely would reject the product or idea altogether. The other pertinent question is, benefit to WHOM? Does the average consumer really benefit more than the cost imposed and the harm potentially incurred? The authors argue NO: the use of the term AI is really a bait and switch for increased AUTOMATION across the board, automation that will decrease the demand for labor and remove human judgment from decision-making and categorizing. It will end up benefiting the ownership and finance classes at the expense of everyone else. What is Creator’s perspective?
Closed • Nicola asked 21 hours ago • Problems in Society • 11 views • 0 answers • 0 votes

The terms AI and Artificial Intelligence suddenly became relevant in the 2010s with the fortuitous adoption of chip technology designed to solve an entirely different problem, namely presenting complex and fast-changing graphics on computer screens, used mostly to make video games more realistic and lifelike. A little more than a decade ago, a small company named Nvidia made a graphics processor for making computer video a LOT faster. Today it’s a trillion-dollar company, because that processor was successfully adapted for AI processing with little modification. Once this discovery was made, untold TRILLIONS of dollars have since been poured into making billions of these chips. Massive data centers are being built to utilize them, requiring vast amounts of resources and electricity. AI was less a software innovation than it was a hardware innovation. At the end of the day, these chips are overwhelmingly “number crunchers,” not much different in base functionality from an electronic calculator, only vastly miniaturized for speed and scaled up for volume. Is it fair to say that AI is really just a vast “calculator” when one tries to grasp how it REALLY works? What is Creator’s perspective?
Closed • Nicola asked 21 hours ago • Problems in Society • 15 views • 0 answers • 0 votes
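The “number cruncher” claim in the question above can be made concrete with a small example. Below is a minimal sketch using only plain Python; the matmul helper and the tiny numbers are made up for illustration. A single neural-network layer boils down to multiplying a grid of inputs by a grid of learned weights, nothing but multiplication and addition, which is exactly the workload these graphics chips were repurposed to perform at enormous scale.

```python
# Illustrative only: one neural-network layer reduced to its arithmetic core.
# The names and tiny sizes are made up; a GPU does this same multiply-and-add
# work on matrices with thousands of rows and columns, massively in parallel.

def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p) using plain loops."""
    m, n, p = len(a), len(b), len(b[0])
    result = [[0.0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]  # nothing but multiply and add
    return result

# A toy "layer": 2 input numbers flowing through 6 weighted connections.
inputs = [[1.0, 2.0]]                  # 1 x 2 matrix
weights = [[0.5, -0.25, 0.75],
           [0.125, 0.25, -0.5]]        # 2 x 3 matrix
print(matmul(inputs, weights))         # [[0.75, 0.25, -0.25]]
```

Scaled up to matrices with tens of thousands of rows and columns and repeated across many layers, this is the bulk of the arithmetic behind the data-center buildout the question describes.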
When people think of AI, most think of chatbots like ChatGPT and Grok. These technologies are based on a software architecture called neural networks, and the way these chatbots are put together is called an LLM, or large language model. A large language model is really just a very sophisticated pattern matcher, and the shortcut used to match patterns is statistical probability. At its very foundation it makes a vast number (hundreds, thousands, millions or more) of microscopic decisions based on what is statistically more or less probable in terms of what comes before or after a word. Is it more probable that the word “and” follows the word “this,” or that it follows the word “that”? So any response to a question put to ChatGPT or Grok is the result of deep statistical analysis and pattern matching, with no actual intelligence involved. What is Creator’s perspective?
Closed • Nicola asked 22 hours ago • Problems in Society • 16 views • 0 answers • 0 votes
(A toy sketch of this next-word statistics idea appears at the end of this page.)

The authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want wrote: “With LLMs (large language models), the situation is even worse than garbage in/garbage out – they will make paper-mache out of their training data, mushing it up and remixing it into new forms that don’t preserve the communicative intent of original data. Paper-mache made out of good data is still paper-mache.” They also write: “This is why we like to call language models (like popular chatbots) ‘synthetic text extruding machines.’” And further: “In the case of language modeling, the correct answer of which word came next is just whatever word happened to come next in the training corpus. … So if (popular chatbots) are nothing more than souped-up autocomplete, why are so many people convinced that it’s actually ‘understanding’ and ‘reasoning?’” Why indeed? What is Creator’s perspective?
Closed • Nicola asked 22 hours ago • Problems in Society • 7 views • 0 answers • 0 votes

The authors of both books [The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want and AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference] were not the least bit concerned that AI presented an immediate or near-term existential threat to humanity in any way, shape, or form, despite copious media hype to the contrary. All the authors, on the other hand, were VERY concerned about the misuse of AI to reduce our freedom and agency to choose for ourselves, to retain the rights to our creative outputs, and even to have recourse when AI decides wrongly (which they assert it is guaranteed to do). Can Creator tell us how Empowered Prayer, the Lightworker Healing Protocol, Deep Subconscious Mind Reset, and Divine Life Support are the best ways to combat the danger and encroachment of AI in our lives?
Closed • Nicola asked 22 hours ago • Problems in Society • 6 views • 0 answers • 0 votes
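Picking up the two LLM questions above (next-word probability and “souped-up autocomplete”): here is a minimal sketch, assuming only the Python standard library, of how next-word statistics can be counted from a training text and then used to extrude new text. The toy corpus, the variable names, and the bigram (two-word) simplification are all illustrative assumptions; real chatbots use neural networks over far longer contexts, but the training target is the same, namely the word that actually came next in the corpus.

```python
import random
from collections import Counter, defaultdict

# Toy "training corpus": the correct next word is simply whatever word
# happened to come next in this text.
corpus = "this and that and this or that and this and that".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

# Is "and" more probable after "this" or after "that"?
for word in ("this", "that"):
    counts = following[word]
    total = sum(counts.values())
    print(word, {w: round(c / total, 2) for w, c in counts.items()})

# Generate text by repeatedly sampling a statistically likely next word.
random.seed(0)
word = "this"
output = [word]
for _ in range(8):
    counts = following[word]
    words, weights = zip(*counts.items())
    word = random.choices(words, weights=weights)[0]
    output.append(word)
print(" ".join(output))
```

The generated string is fluent-looking recombination of the training words with no meaning behind it, which is the point the questions above are pressing on.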