DWQA Questions › Tag: nature

It has been reported that the US government has accumulated a hoard of Bitcoin worth 15-20 billion dollars through confiscation of illegal funds. This hoarding was initiated by President Trump, who halted what would have been ongoing sales to convert the Bitcoin to cash. Is this a sinister move, intended to create a way to trigger a collapse of that asset by dumping a large amount of Bitcoin on the market at some point in the future?
Closed · Nicola asked 2 hours ago • Extraterrestrial Corruption of Human Institutions · 5 views · 0 answers · 0 votes

In the book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, co-authors Emily Bender and Alex Hanna argue that the term AI (an acronym for Artificial Intelligence) is marketing hype. Google defines the verb hype as to “promote or publicize (a product or idea) intensively, often exaggerating its importance or benefits.” The implication is that without the exaggerated claims of benefit, and if people knew what they were REALLY getting with widespread adoption of these technologies bundled under the AI moniker, they quite likely would reject the product or idea altogether. The other pertinent question is: benefit to WHOM? Does the average consumer really benefit more than the cost imposed and the harm potentially incurred? The authors argue NO, the use of the term AI is really a bait and switch for increased AUTOMATION across the board, automation that will decrease the demand for labor and remove human judgment from decision-making and categorizing. It will end up benefiting the ownership and finance classes at the expense of everyone else.
What is Creator’s perspective?
Closed · Nicola asked 3 hours ago • Problems in Society · 4 views · 0 answers · 0 votes

The term AI, and Artificial Intelligence, suddenly became relevant in the 2010s with the fortuitous adoption of chip technology designed to solve an entirely different problem: presenting complex and fast-changing graphics on computer screens, used mostly to make video games more realistic and lifelike. A little more than a decade ago, a small company named Nvidia made a graphics processor for rendering computer video a LOT faster. Today it is a trillion-dollar company, because that processor was successfully adapted for AI processing with little modification. Once this discovery was made, untold TRILLIONS of dollars have been poured into making billions of these chips, and massive data centers are being built to utilize them, requiring vast amounts of resources and electricity. AI was less a software innovation than it was a hardware innovation. At the end of the day, these chips are overwhelmingly “number crunchers,” not much different in basic functionality from an electronic calculator, only vastly miniaturized for speed and scaled up for volume. Is it fair to say that AI is really just a vast “calculator” when one tries to grasp how it REALLY works? What is Creator’s perspective?
Closed · Nicola asked 3 hours ago • Problems in Society · 4 views · 0 answers · 0 votes

When people think of AI, most think about chatbots like ChatGPT and Grok. These technologies are based on a software architecture called neural networks. Another name for the way these chatbots are put together is LLMs, or large language models. A large language model is really just a very sophisticated pattern matcher, and the shortcut used to match patterns is statistical probability. At its very foundation, it makes large numbers (hundreds, thousands, millions or more) of microscopic decisions based on what is statistically more or less probable in terms of what comes before or after a word. Is it more probable the word “and” follows the word “this,” or more probable it follows the word “that”? So any response to a question posed to ChatGPT or Grok is the result of deep statistical analysis and pattern matching, with no actual intelligence involved. What is Creator’s perspective?
Closed · Nicola asked 3 hours ago • Problems in Society · 5 views · 0 answers · 0 votes

An argument can be made that no single human being really understands how AI works. What was discovered when they added more processing power and more layers of pattern matching (what they call deep learning) for building large language models is that the chatbots became REMARKABLY humanlike in terms of their output. This was a downright shocking discovery, and this development alone suddenly diverted trillions of dollars of investment toward the development of AI. But according to the authors of the recent book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, Arvind Narayanan and Sayash Kapoor of Princeton University, relatively little of that money has been spent on research that would attempt to understand WHY we are getting this result. It seems no one really knows, and worse, no one REALLY CARES. Instead, the agenda is to throw more and faster hardware at it, to “FEED THE BEAST” and give it more power, more capacity, more memory, with no one truly understanding why it even works as it does. Is this more human folly unfolding before our very eyes? What is Creator’s perspective?
Closed · Nicola asked 3 hours ago • Problems in Society · 3 views · 0 answers · 0 votes

Another technology with mysterious origins is cryptocurrency. To this day, no one really knows where Bitcoin originated, who created it, or who introduced it to the world. There is speculation all over the place, and it’s assumed someone knows, but that information is not public knowledge. Is Bitcoin a “gift” (more like a naked Trojan horse) from the interlopers?
And is AI, and how it really works, similar in its origins? What can Creator tell us?
Closed · Nicola asked 3 hours ago • Problems in Society · 3 views · 0 answers · 0 votes

There is a good joke that’s been around for a while, but it’s especially pertinent when it comes to evaluating AI: “It must be true, I read it on the Internet.” Everyone knows this means it’s more likely NOT to be true. But when it comes to AI, almost everything it “knows” comes from the Internet. And because it tends to weigh true and false by frequency of encounter, the more often AI encounters the same images, assertions, statements, treatments, opinions, etc., the more statistically weighted they will be. The saying “There’s safety in numbers” comes to mind, in that the more frequently something is encountered, the more genuine it presumably is. This becomes AI’s “default assumption” about the material it is trained with, and it can only utilize, evaluate, and regurgitate the material it is trained with. This turned out to be quite a problem early on, because the sheer amount of racist, violent, and derogatory material on the Internet was not fully appreciated until AI started digesting it. It became necessary to employ untold thousands of low-paid (on the order of two dollars a day) “content evaluators,” mostly in third-world countries, to filter out gore, hate speech, child sexual abuse material, and pornographic images. If AI read it on the Internet, it must be true? What is Creator’s perspective?
Closed · Nicola asked 3 hours ago • Problems in Society · 4 views · 0 answers · 0 votes

The authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want wrote: “With LLMs (large language models), the situation is even worse than garbage in/garbage out – they will make paper-mache out of their training data, mushing it up and remixing it into new forms that don’t preserve the communicative intent of original data. Paper-mache made out of good data is still paper-mache.” They also write: “This is why we like to call language models (like popular chatbots) ‘synthetic text extruding machines.'” And further: “In the case of language modeling, the correct answer of which word came next is just whatever word happened to come next in the training corpus. … So if (popular chatbots) are nothing more than souped-up autocomplete, why are so many people convinced that it’s actually ‘understanding’ and ‘reasoning?'” Why indeed? What is Creator’s perspective?
Closed · Nicola asked 3 hours ago • Problems in Society · 2 views · 0 answers · 0 votes

Propaganda has always been a huge problem, but it may be an even bigger issue for AI. China and the Chinese Communist Party spend more money and effort, and engage more of their citizens, to spread blatantly false propaganda than perhaps the rest of the world combined, so much so that they felt the need to create their very own global social media platform, TikTok. The Trump administration has even proposed banning TikTok altogether because of the nefarious role the platform plays in both gathering intelligence and spreading propaganda. Among the lies people are starting to believe about China: that it has no crime, that its infrastructure is among the most advanced and safest in the world, that there are no homeless people in China, that everyone there has a meaningful and lucrative job, that they are the healthiest and happiest people on the planet, and on and on. When, in fact, the exact opposite is more often than not the case. And for every good lie they tell about themselves, they tell an equally bad one about America and Europe. The problem is, they are so prolific and extreme with this propaganda that the Chinese people themselves believe none of it (about themselves, anyway), while Americans and Europeans (especially young ones) are beginning to believe all of it.
With AI having no way to filter this for truth or falsity other than volume, there appears to be a genuine danger of AI itself presenting this propaganda as gospel truth: that China is great and America and Europe are evil. What is Creator’s perspective?
Closed · Nicola asked 3 hours ago • Problems in Society · 6 views · 0 answers · 0 votes

The authors of AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference suggest that the track record of AI used for predicting social outcomes is so abysmally bad that it may actually amount to fraud. They write: “In short, some existing limits to predictability could be overcome with more and better data, while others seem intrinsic (built in and unfixable). In some cases, such as cultural products (like resume-scanning AI, or AI used to decide who gets social benefits), we don’t expect predictability to get much better at all. In others, such as predicting individuals’ life outcomes, there could be some improvements but not drastic changes. Unfortunately, this hasn’t stopped companies from selling AI for making consequential decisions about people by predicting their future. So it is important to resist AI snake oil that’s already in wide use today rather than passively hope that predictive AI technology will get better.” What is Creator’s perspective?
Closed · Nicola asked 3 hours ago • Problems in Society · 5 views · 0 answers · 0 votes

The authors of both books [The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want and AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference] were not the least bit concerned that AI presents an immediate or near-term existential threat to humanity in any way, shape, or form, despite copious media hype to the contrary. All the authors, on the other hand, were VERY concerned about the misuse of AI to reduce our freedom and agency to choose for ourselves, to retain the rights to our creative outputs, and even to have recourse when AI decides wrongly (which they assert it is guaranteed to do). Can Creator tell us how Empowered Prayer, the Lightworker Healing Protocol, Deep Subconscious Mind Reset, and Divine Life Support are the best ways to combat the danger and encroachment of AI in our lives?
Closed · Nicola asked 4 hours ago • Problems in Society · 3 views · 0 answers · 0 votes

A viewer asks: “When the perpetrators go back in time in order to redo a period of time, does the aging of our bodies revert to the state it was in at that previous time, or do our bodies continue aging through that process, thus aging twice as much?” What can Creator tell us?
Closed · Nicola asked 7 hours ago • Metaphysics · 11 views · 0 answers · 0 votes

You have told us there has not been an alien time travel to the past disrupting things within the past year, but there was one not too long before, in order to optimize preparation for their plans leading up to the desired Alien Disclosure deception. Is one of the limitations of this strategy that even the alien AI systems are not very good at making future predictions? They can see what changes might need to happen, making a return to the past desirable to enable re-using that time span, but will not know how long a period will be needed to implement the changes fully enough to alter the future in the desired ways. In other words, will using their time travel technology to manipulate our world more certainly toward a desired end result inevitably introduce some additional uncertainty about the time required to implement the changes effectively?
Closed · Nicola asked 4 days ago • Metaphysics · 20 views · 0 answers · 0 votes

Was this reporting in Scientific American accurate in its speculations: “Bird flu showed up on dairy farms and surprised everyone. How did bird flu jump to cows?” What can Creator tell us?
Closed · Nicola asked 5 days ago • Extraterrestrial Interlopers · 27 views · 0 answers · 0 votes

A practitioner asks: “Have the interlopers implemented a time reset in the last year, or are we still experiencing the same timeline with their same plans for power outages, disclosure, and gold reset?” What can Creator tell us?
Closed · Nicola asked 5 days ago • Metaphysics · 65 views · 0 answers · 0 votes
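The “statistical next-word choice” described in the large language model question above (is “and” more likely to follow “this” or “that”?) can be illustrated with a toy bigram counter. This is only a minimal sketch: real LLMs use neural networks with billions of parameters rather than literal word counts, and the tiny corpus here is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Invented toy corpus, for illustration only.
corpus = "this and that and this but not that and this and".split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def prob_next(prev, nxt):
    """Estimated probability that `nxt` follows `prev`, from raw counts."""
    total = sum(following[prev].values())
    return following[prev][nxt] / total if total else 0.0

def most_probable_next(prev):
    """The statistically most likely word to follow `prev` in the corpus."""
    counts = following[prev]
    return counts.most_common(1)[0][0] if counts else None

print(prob_next("this", "and"))   # "and" follows "this" 2 of 3 times
print(prob_next("that", "and"))   # "and" follows "that" every time
print(most_probable_next("this"))
```

In this toy corpus the model would pick “and” after either word, but with more confidence after “that” (probability 1.0) than after “this” (about 0.67). Repeating that one microscopic decision, word after word, is the entirety of the “choice” being made at each step.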