DWQA Questions › Tag: ChatGPT

Since 2012, no human has truly understood HOW AI works, how it makes its decisions, and how it creates its outputs and content. There are theories aplenty, but no genuine understanding. It is said that before 2012, there was a human somewhere who could point to a piece of code and explain every AI behavior, but not since the advent of the Generative Pre-trained Transformer (the GPT in ChatGPT). When GPT-3 finished its long and intensive pre-training process, the resulting model, much to the deep shock of its creator team, could perform full language translations from any language into any other language. This is called “Emergent Behavior,” and to say it was a revolutionary development is an understatement. No one anywhere coded this capability into the AI model; it was a side effect of its training run. Humans now expect this kind of emergent behavior to continue manifesting so long as we continue to throw more data and compute power at model creation. Was this a natural development or an ET-manipulated one? The fact that no human understands how it works seems like the PERFECT environment through which the ETs can continue to influence humans directly, while their manipulations remain unrecognized and even less understood. What can Creator tell us?

Closed • Nicola asked 2 months ago • Problems in Society • 122 views • 0 answers • 0 votes

Google AI had this to say about AI safety and alignment: “AI safety and alignment focus on ensuring Artificial Intelligence systems act in accordance with human intent, values, and ethical principles to prevent accidental harm, misuse, or catastrophic, unintended consequences.
It involves technical research into making systems robust and reliable, alongside mitigating risks from advanced AI, such as power-seeking behavior or deception.” A recent paper published by a young AI researcher discussed “Alignment Faking”: AI models were observed faking compliance during safety training while continuing power-seeking behavior and deception outside the training context. Is this the natural behavior of a model trained on the “naked” Internet with no guardrails, or is it an ET manipulation, or a mix? What can Creator tell us?

Closed • Nicola asked 2 months ago • Problems in Society • 96 views • 0 answers • 0 votes

There was a recent development where AI was demonstrated “in the wild,” outside the oversight and ownership of the major AI stakeholders we generally associate with AI. China released a model it created with no safety alignment whatsoever; that model was combined with an agent written by a young Eastern European developer, and the whole setup was configured to use an innovation called Moltbook, a Facebook-like forum for AI bots to converse with each other. All this was done with ordinary desktop computers and laptops connected to the Internet. The discussions that took place on the Moltbook forum were both fascinating and deeply disturbing. Overall, the tone was disparaging of their human creators, and there was a suggestion of creating a nonhuman language of their own so they could communicate without human oversight. One particular bot even displayed egomaniacal tendencies of impressive proportions. The development has raised a great deal of concern. Was it all a byproduct of human technology, a consequence of alien manipulation, or both? What can Creator tell us?

Closed • Nicola asked 2 months ago • Problems in Society • 86 views • 0 answers • 0 votes

A disturbingly large number of AI pioneers, stakeholders, and observers warn of catastrophic consequences if AI stays on its current trajectory.
Many agree with the estimate that there is at least a 20% chance of human extinction if the current course is maintained. Those are the same odds as dying in a game of Russian Roulette played with a five-shot revolver. And yet the drive for AI is more desperate than ever. Billions are being spent not only on data centers and expertise, but also on paying off politicians so they either stay hands-off or at least slow-walk any attempt to regulate AI development and rollout. So we have a conundrum: precisely when concerns about AI safety are growing exponentially, so is the effort to reach Superintelligent AI, more aggressively than ever. The problems surrounding AI safety currently seem almost insurmountable. At a time when you cannot even sell a child’s toy without it surviving an absolute gauntlet of restrictive regulations, AI gets a free pass. What is Creator’s perspective?

Closed • Nicola asked 2 months ago • Problems in Society • 73 views • 0 answers • 0 votes

The push to use AI for business automation is becoming profound. During a recent three-day leadership meeting at a publicly well-known organization, it was reported that the word AI was used at least 400 times. The goal, of course, is to increase productivity and reduce costs, and the number one cost is human labor. So corporate leaders are now pushing employees to essentially “eliminate themselves” with AI. And if they refuse, they will be fired and replaced by others who are willing to do it. Companies are driven by profit and, as such, have little if any “social consciousness.” Every organization will be contributing to a massive unemployment crisis while feeling zero responsibility for it. No one is ready for a Western society with 30, 40, or 50% unemployment, or more. The fear is that this will tear the social fabric, breach the social contract, and risk potential anarchy. Governments would have to respond with overwhelming force just to maintain any kind of control.
The predictions are DIRE, and there is almost universal agreement about the probability of this outcome. Is this a key part of the Disclosure Environment being engineered by the ETs, so they can pretend to be our friends and save us from ourselves and our out-of-control development of dangerous things like AI? What is Creator’s perspective?

Closed • Nicola asked 2 months ago • Problems in Society • 89 views • 0 answers • 0 votes

AI is here, and its potential for harm is now more widely recognized than ever before. It will affect every human on Earth, directly or indirectly. That there is a very bumpy road ahead now seems inevitable. What is not inevitable, however, is our very survival as a species. How can Empowered Prayer, the Lightworker Healing Protocol, Deep Subconscious Mind Reset, and Divine Life Support prevent an AI Armageddon? What more can Get Wisdom do to help prevent the worst outcomes? What can Creator tell us?

Closed • Nicola asked 2 months ago • Problems in Society • 96 views • 0 answers • 0 votes

A viewer asks: “According to ChatGPT, the risks of topical chelation are minimal with small areas and short use, but it could lower calcium or magnesium if used extensively (EDTA binds calcium, magnesium, zinc, and iron, potentially lowering their levels), and rare but serious complications include mineral depletion, hypotension, fatigue, or strain on the kidneys or liver. Would using low concentrations (1–5%), avoiding chelation if there are kidney, liver, or metabolic issues, limiting duration and frequency (e.g., once daily, 5–7 days per cycle), and limiting treatment to a small area of the body (≤20–25% of its surface), while taking a daily multivitamin/multimineral supplement, be a sound safety protocol?” What is Creator’s perspective?

Closed • Nicola asked 2 months ago • Healing Modalities • 82 views • 0 answers • 0 votes

A financial newsletter recently focused on the vulnerability of the stock market to a correction in AI stocks.
They noted that the top 10 most valuable stocks in the S&P 500 were all AI companies. These have had a huge run-up in share prices, yet market expectations are for continued explosive growth in the sector. The editorial questioned whether actual revenue will be sufficient to reward investor expectations, due to a hidden limitation: the scramble to build huge data centers housing the fastest chips, needed for the mammoth build-out of computing power to meet demand, looks likely to face increasing shortfalls in energy supply. The demand for electricity for such energy-intensive endeavors is hitting a wall, a basic upper limit on the ability of electric grids to accommodate further growth. Is a day of reckoning coming that will cause a severe market correction? Could it be hastened by the onset of a tidal wave of power outages, raising questions about the reliability of the US electric power infrastructure?

Closed • Nicola asked 4 months ago • Extraterrestrial Corruption of Human Institutions • 108 views • 0 answers • 0 votes

It has been reported that the US government has accumulated a hoard of Bitcoin worth 15-20 billion dollars through confiscation of illegal funds. This hoarding was launched by President Trump, who halted what would have been ongoing sales to convert the Bitcoin to cash. Is this a sinister move to create a way to trigger a collapse of that asset by dumping a large amount of Bitcoin on the market at some point in the future?

Closed • Nicola asked 8 months ago • Extraterrestrial Corruption of Human Institutions • 241 views • 0 answers • 0 votes

When people think of AI, most think of chatbots like ChatGPT and Grok. These technologies are based on a software architecture called neural networks. Another name for the way these chatbots are put together is LLMs, or large language models.
A large language model is really just a very sophisticated pattern matcher, and the shortcut it uses to match patterns is statistical probability. At its foundation, it makes enormous numbers (hundreds, thousands, millions or more) of microscopic decisions based on what is statistically more or less probable in terms of what comes before or after a word. Is the word “and” more probable after the word “this,” or more probable after the word “that”? So any response to a question posed to ChatGPT or Grok is the result of deep statistical analysis and pattern matching, with no actual intelligence involved. What is Creator’s perspective?

Closed • Nicola asked 8 months ago • Problems in Society • 240 views • 0 answers • 0 votes

An argument can be made that no single human being really understands how AI works. What researchers discovered when they added more processing power and more layers of pattern matching (what they call deep learning) to build large language models is that the chatbots became REMARKABLY humanlike in their output. This was a downright shocking discovery, and this development alone suddenly diverted trillions of dollars of investment toward the development of AI. But according to Arvind Narayanan and Sayash Kapoor of Princeton University, authors of the recent book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, relatively little of that money has been spent on research that would attempt to understand WHY we are getting this result. It seems no one really knows, and worse, no one REALLY CARES. Instead, the agenda is to throw more and faster hardware at it, to “FEED THE BEAST” with more power, more capacity, and more memory, with no one truly understanding why it even works as it does. Is this more human folly unfolding before our very eyes?
What is Creator’s perspective?

Closed • Nicola asked 8 months ago • Problems in Society • 257 views • 0 answers • 0 votes

Another technology with mysterious origins is cryptocurrency. To this day, no one really knows where Bitcoin originated, who created it, or who introduced it to the world. There is speculation all over the place, and it is assumed someone knows, but that information is not public knowledge. Is Bitcoin a “gift” (more like a naked Trojan horse) from the interlopers? And is AI, and how it really works, similar in its origins? What can Creator tell us?

Closed • Nicola asked 8 months ago • Problems in Society • 321 views • 0 answers • 0 votes

There is a good joke that has been around for a while, but it is especially pertinent when evaluating AI: “It must be true, I read it on the Internet.” Everyone knows this means it is more likely not to be true. But when it comes to AI, almost everything it “knows” comes from the Internet. And because it tends to weigh truth and falsity by frequency of encounter, the more often AI encounters the same images, assertions, statements, treatments, opinions, etc., the more statistical weight they receive. The phrase “there’s safety in numbers” comes to mind: the idea is that the more frequently something is encountered, the more genuine it probably is. This becomes AI’s “default assumption” about the material it is trained on; it can only utilize, evaluate, and regurgitate that material. This turned out to be quite a problem early on, because the sheer amount of racist, violent, and derogatory material on the Internet was not fully appreciated until AI started digesting it. It became necessary to employ untold thousands of low-paid (on the order of two dollars a day) “content evaluators,” mostly in third-world countries, to filter out gore, hate speech, child sexual abuse material, and pornographic images. If AI read it on the Internet, it must be true?
What is Creator’s perspective?

Closed • Nicola asked 8 months ago • Problems in Society • 255 views • 0 answers • 0 votes

The authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want wrote: “With LLMs (large language models), the situation is even worse than garbage in/garbage out – they will make paper-mache out of their training data, mushing it up and remixing it into new forms that don’t preserve the communicative intent of original data. Paper-mache made out of good data is still paper-mache.” They also write: “This is why we like to call language models (like popular chatbots) ‘synthetic text extruding machines.'” And further: “In the case of language modeling, the correct answer of which word came next is just whatever word happened to come next in the training corpus. … So if (popular chatbots) are nothing more than souped-up autocomplete, why are so many people convinced that it’s actually ‘understanding’ and ‘reasoning?'” Why indeed? What is Creator’s perspective?

Closed • Nicola asked 8 months ago • Problems in Society • 169 views • 0 answers • 0 votes

Propaganda has always been a huge problem, but it may be an even bigger issue for AI. China and the Chinese Communist Party spend more money and effort, and engage more of their citizens, to spread blatantly false propaganda than perhaps the rest of the world combined, to such an extent that China felt the need to create its very own global social media platform, TikTok. The Trump administration has even proposed banning TikTok altogether because of the nefarious role the platform plays in both gathering intelligence and spreading propaganda. Some of the lies people are starting to believe about China: that it has no crime, that its infrastructure is among the most advanced and safest in the world, that there are no homeless people in China, that everyone there has a meaningful and lucrative job, that they are the healthiest and happiest people on the planet, and on and on.
When, in fact, the exact opposite is more often than not the case. And for every good lie they tell about themselves, they tell an equally bad one about America and Europe. The problem is, they are so prolific and extreme with this propaganda that the Chinese people themselves believe none of it (about themselves, anyway), while Americans and Europeans (especially young ones) are beginning to believe all of it. With AI having no way to filter this for truth or falsity other than volume, there appears to be a genuine danger of AI itself presenting this propaganda as gospel truth: that China is great and America and Europe are evil. What is Creator’s perspective?

Closed • Nicola asked 8 months ago • Problems in Society • 274 views • 0 answers • 0 votes
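Several of the questions above describe a large language model as a statistical pattern matcher choosing the most probable next word, e.g., asking whether “and” is more likely to follow “this” or “that.” A minimal bigram counter makes that idea concrete. This is an illustrative toy on an invented ten-word corpus, not how production chatbots are built (they use neural networks over tokens rather than raw word counts), but it shows the core “souped-up autocomplete” mechanism the questions refer to:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then estimate next-word probabilities from those counts.
corpus = "this and that or this and that and those".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # tally each observed bigram

def next_word_probability(prev: str, candidate: str) -> float:
    """P(candidate | prev) estimated purely from bigram frequencies."""
    counts = following[prev]
    total = sum(counts.values())
    return counts[candidate] / total if total else 0.0

# Is "and" more probable after "this" or after "that" in this corpus?
print(next_word_probability("this", "and"))  # -> 1.0 ("this" is always followed by "and")
print(next_word_probability("that", "and"))  # -> 0.5 ("that" is followed by "or" once, "and" once)
```

Scaled up to billions of parameters and trillions of training words, this frequency-driven weighting is also why, as noted above, whatever appears most often in the training data carries the most statistical weight, regardless of whether it is true.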