DWQA Questions › Tag: misdirection

You have said you would not recommend fenbendazole for treating or preventing cancer, but would support the use of ivermectin and hydroxychloroquine. Dr. Andreas Kalcker promotes treating cancer with Chlorine Dioxide Solution to first reach a clinical redox plateau, then introducing albendazole/fenbendazole to impose mitotic stress, and then adding ivermectin to quiet excitatory signaling. In this scenario, the fenbendazole is used to block glucose handling by cancer cells to limit energy availability, and to destabilize microtubules to hinder cell division. Is his theory sound? Is his timed sequence a more elegant way to utilize fenbendazole effectively to derive benefit?
Closed | Nicola asked 1 week ago • Healing Modalities | 41 views | 0 answers | 0 votes

A practitioner asks: “An article [on the Internet] discusses the potential of fenbendazole, a common antiparasitic drug used in veterinary medicine, as a treatment for cancer. Fenbendazole (FBZ) has gained attention due to anecdotal reports suggesting it may have anticancer properties. FBZ may also enhance the efficacy of traditional cancer treatments like chemotherapy and radiation. FBZ, though, has no clinical literature as an anti-cancer treatment. How likely is it that FBZ could be a safe and effective anti-cancer treatment given that there are no treatment protocols, dosage guidelines, or side effect data?”
Closed | Nicola asked 1 week ago • Healing Modalities | 111 views | 0 answers | 0 votes

In the book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, co-authors Emily Bender and Alex Hanna argue that the term AI (an acronym for Artificial Intelligence) is marketing hype.
Google defines the word hype as “promote or publicize (a product or idea) intensively, often exaggerating its importance or benefits.” The implication is that without the exaggerated claim of benefit, and if people knew what they were REALLY getting with widespread adoption of the technologies bundled under the AI moniker, they quite likely would reject the product or idea altogether. The other pertinent question is: benefit to WHOM? Does the average consumer really benefit more than the cost imposed and the harm potentially incurred? The authors argue NO, that the use of the term AI is really a bait and switch for increased AUTOMATION across the board, automation that will decrease the demand for labor and remove human judgment from decision-making and categorizing. It will end up benefiting the ownership and finance classes at the expense of everyone else. What is Creator’s perspective?
Closed | Nicola asked 1 week ago • Problems in Society | 28 views | 0 answers | 0 votes

The term AI (Artificial Intelligence) suddenly became relevant in the 2010s with the fortuitous adoption of chip technology designed to solve an entirely different problem: rendering complex, fast-changing graphics on computer screens, used mostly to make video games more realistic and lifelike. A little more than a decade ago, a small company named Nvidia made graphics processors that made computer video a LOT faster. Today, it is a trillion-dollar company because those processors were successfully adapted for AI processing with little modification. Once this discovery was made, untold TRILLIONS of dollars have been poured into making billions of these chips. Massive data centers are being built to utilize them, requiring vast amounts of resources and electricity. AI was less a software innovation than it was a hardware innovation.
At the end of the day, these chips are overwhelmingly “number crunchers,” not much different in base functionality from an electronic calculator, only vastly miniaturized for speed and scaled up for volume. Is it fair to say that AI is really just a vast “calculator” when one tries to grasp how it REALLY works? What is Creator’s perspective?
Closed | Nicola asked 1 week ago • Problems in Society | 68 views | 0 answers | 0 votes

When people think of AI, most think of chatbots like ChatGPT and Grok. These technologies are based on a software architecture called neural networks, and the way these chatbots are put together is known as the LLM, or large language model. A large language model is really just a very sophisticated pattern matcher, and the shortcut used to match patterns is statistical probability. At its very foundation, it makes large numbers (hundreds, thousands, millions or more) of microscopic decisions based on what is statistically more or less probable in terms of what comes before or after a word. Is it more probable that the word “and” follows the word “this,” or that it follows the word “that?” So any response to a question posed to ChatGPT or Grok is the result of deep statistical analysis and pattern matching, with no actual intelligence involved. What is Creator’s perspective?
Closed | Nicola asked 1 week ago • Problems in Society | 65 views | 0 answers | 0 votes

An argument can be made that no single human being really understands how AI works. What researchers discovered when they added more processing power and more layers of pattern matching (what they call deep learning) for building large language models is that the chatbots became REMARKABLY humanlike in their output. This was a downright shocking discovery, and this development alone suddenly diverted trillions of dollars of investment towards the development of AI.
But according to the authors of the recent book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, Arvind Narayanan and Sayash Kapoor of Princeton University, relatively little of that money has been spent on research attempting to understand WHY we are getting this result. It seems no one really knows, and worse, no one REALLY CARES. Instead, the agenda is to throw more and faster hardware at it, to “FEED THE BEAST” with more power, more capacity, and more memory, with no one truly understanding why it even works as it does. Is this more human folly unfolding before our very eyes? What is Creator’s perspective?
Closed | Nicola asked 1 week ago • Problems in Society | 56 views | 0 answers | 0 votes

Another technology with mysterious origins is cryptocurrency. To this day, no one really knows where Bitcoin originated, who created it, or who introduced it to the world. There is speculation all over the place, and it is assumed someone knows, but that information is not public knowledge. Is Bitcoin a “gift” (more like a naked Trojan horse) from the interlopers? And is AI, and how it really works, similar in its origins? What can Creator tell us?
Closed | Nicola asked 1 week ago • Problems in Society | 70 views | 0 answers | 0 votes

There is a good joke that has been around for a while, but it is especially pertinent when it comes to evaluating AI: “It must be true, I read it on the Internet.” Everyone knows this means it is more likely not to be true. But when it comes to AI, almost everything it “knows” comes from the Internet. And because it tends to weigh truth and falsity by frequency of encounter, the more often AI encounters the same images, assertions, statements, treatments, opinions, etc., the more statistically weighted they will be. The saying “there’s safety in numbers” comes to mind: the more frequently something is encountered, the more genuine it is presumed to be.
This becomes AI’s “default assumption” about the material it is trained with; it can only utilize, evaluate, and regurgitate that material. This turned out to be quite a problem early on, because the sheer amount of racist, violent, and derogatory material on the Internet was not fully appreciated until AI started digesting it. It became necessary to employ untold thousands of low-paid (on the order of two dollars a day) “content evaluators,” mostly in third-world countries, to filter out gore, hate speech, child sexual abuse material, and pornographic images. If AI read it on the Internet, it must be true? What is Creator’s perspective?
Closed | Nicola asked 1 week ago • Problems in Society | 61 views | 0 answers | 0 votes

The authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want wrote: “With LLMs (large language models), the situation is even worse than garbage in/garbage out – they will make paper-mache out of their training data, mushing it up and remixing it into new forms that don’t preserve the communicative intent of original data. Paper-mache made out of good data is still paper-mache.” They also write: “This is why we like to call language models (like popular chatbots) ‘synthetic text extruding machines.’” And further: “In the case of language modeling, the correct answer of which word came next is just whatever word happened to come next in the training corpus. … So if (popular chatbots) are nothing more than souped-up autocomplete, why are so many people convinced that it’s actually ‘understanding’ and ‘reasoning?’” Why indeed? What is Creator’s perspective?
Closed | Nicola asked 1 week ago • Problems in Society | 19 views | 0 answers | 0 votes

Propaganda has always been a huge problem, but it may be an even bigger issue for AI. China and the Chinese Communist Party spend more money and effort, and engage more of their citizens, to spread blatantly false propaganda than perhaps the rest of the world combined.
So much so that China felt the need to create its very own global social media platform, TikTok. The Trump administration has even proposed banning TikTok altogether because of the nefarious role the platform plays in both gathering intelligence and spreading propaganda. Among the lies people are starting to believe about China: that it has no crime, that its infrastructure is among the most advanced and safest in the world, that there are no homeless people in China, that everyone there has a meaningful and lucrative job, that they are the healthiest and happiest people on the planet, and on and on, when, in fact, the exact opposite is more often than not the case. And for every good lie they tell about themselves, they tell an equally bad one about America and Europe. The problem is, they are so prolific and extreme with this propaganda that the Chinese people themselves believe none of it (about themselves, anyway), while Americans and Europeans (especially young ones) are beginning to believe all of it. With AI having no way to filter this for truth or falsity other than by volume, there appears to be a genuine danger of AI itself presenting this propaganda as gospel truth: that China is great and America and Europe are evil. What is Creator’s perspective?
Closed | Nicola asked 1 week ago • Problems in Society | 71 views | 0 answers | 0 votes

The authors of AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference suggest that the track record of AI used for predicting social outcomes is so abysmally bad that it may actually amount to fraud. They write: “In short, some existing limits to predictability could be overcome with more and better data, while others seem intrinsic (built in and unfixable). In some cases, such as cultural products (like resume scanning AI, or AI used to decide who gets social benefits), we don’t expect predictability to get much better at all.
In others, such as predicting individuals’ life outcomes, there could be some improvements but not drastic changes. Unfortunately, this hasn’t stopped companies from selling AI for making consequential decisions about people by predicting their future. So it is important to resist AI snake oil that’s already in wide use today rather than passively hope that predictive AI technology will get better.” What is Creator’s perspective?
Closed | Nicola asked 1 week ago • Problems in Society | 22 views | 0 answers | 0 votes

The authors of both books [The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want and AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference] were not the least bit concerned that AI presents an immediate or near-term existential threat to humanity in any way, shape, or form, despite copious media hype to the contrary. All the authors, on the other hand, were VERY concerned about the misuse of AI to reduce our freedom and agency to choose for ourselves, to retain the rights to our creative outputs, and even to have recourse when AI decides wrongly (which they assert it is guaranteed to do). Can Creator tell us how Empowered Prayer, the Lightworker Healing Protocol, Deep Subconscious Mind Reset, and Divine Life Support are the best ways to combat the danger and encroachment of AI in our lives?
Closed | Nicola asked 1 week ago • Problems in Society | 21 views | 0 answers | 0 votes

Advanced Bionutritionals recently announced the sale of Thyrovanz, which contains 100 mg of bovine thyroid glandular powder. This natural ingredient includes the thyroid hormones T1, T2, T3, and T4, along with calcitonin, which together support healthy thyroid function.
Is the only advantage of Thyrovanz that it will better help the 15% of hypothyroid patients who have abnormal conversion of T4 to T3 than would Armour Thyroid extract, or does Thyrovanz also deliver a healthier balance or more effective supplementation of endogenous hormone production?
Closed | Nicola asked 1 week ago • Healing Modalities | 67 views | 0 answers | 0 votes

Is Thyrovanz, an extract of bovine thyroid glands containing the thyroid hormones T4, T3, T2, and T1, along with calcitonin, a safe and effective hormonal source for reversing hypothyroidism? What percentage of those with low thyroid function would be helped significantly to regain their energy and resolve their other symptoms?
Closed | Nicola asked 1 week ago • Healing Modalities | 74 views | 0 answers | 0 votes

Is Thyrovanz more effective than Synthroid (levothyroxine, a synthetic form of thyroxine)? Does it suppress natural thyroid hormone production as Synthroid does, creating the need for lifelong administration?
Closed | Nicola asked 1 week ago • Healing Modalities | 19 views | 0 answers | 0 votes