
    Why Do the Greatest Minds Sound the Alarm About AI?

    Vilmate

    Tags: AI, security, technology

    Even those who aren’t tech-savvy are reading about AI models in the news. In the IT industry, AI has become more than a significant trend. However, alongside the excitement, there are growing concerns about the future this new technology might bring.

    In reality, the development of artificial intelligence was bound to happen. Our world is changing rapidly, and technology has advanced remarkably in the last century. The progress in AI is often compared to the industrial revolutions brought by inventions like the steam engine, electricity, and nuclear power.

    Experts who understand the technology better are concerned about much broader issues. Ironic comments about AI taking over the world don’t seem as amusing anymore. Will our future resemble the scenarios of movies like “The Matrix” or “Transcendence”? That’s one of the most exciting, yet most intricate, topics for discussion. At Vilmate, we always look at things objectively and are ready to investigate any aspect of global tech.

    How it all began

    The idea of humans creating artificial consciousness has intrigued us for centuries, from ancient myths to medieval legends. Today, as the idea becomes practically feasible, it still captures our imagination.

    In the 1940s and 1950s, experts in mathematics, psychology, and engineering began thinking about making artificial intelligence a reality. Studies in neurology revealed that the human brain operates through neurons connected by an electrical network. Alan Turing, considered the pioneer of AI, developed the theory of computation, showing that any formal reasoning process could, in principle, be represented digitally. This breakthrough suggested that the brain’s processes could be replicated in ordinary computer code.

    In 1950, Turing published “Computing Machinery and Intelligence,” discussing the potential of creating intelligent machines. By 1951, the first programs capable of playing checkers and chess had been developed. These early programs, now known as game-playing AI, laid the basic foundation of the technology.

    In 1955, Allen Newell and Herbert A. Simon developed the Logic Theorist, widely considered the first artificial intelligence program for mathematical reasoning. Over time, it proved 38 theorems from Principia Mathematica, finding some proofs more elegant than the originals.

    The term “artificial intelligence,” as we recognize it today, was coined by John McCarthy in 1956. Together with Marvin Minsky, McCarthy organized the Dartmouth Workshop, where leading experts collaborated on new AI programs.

    [Image: the evolution of AI]

    The Dartmouth Workshop led to the development of various impressive AI systems capable of solving algebraic and geometric problems and even learning to understand English. These advancements generated significant optimism among many scientists. However, during that time, the computational power of machines was insufficient to create genuinely sophisticated AI models.

    The concept of artificial intelligence has had its ups and downs. The real surge in AI development came in the 1980s, when Japanese and American scientists built systems that learned from the knowledge of human experts in various fields. These expert systems were designed to solve specific problems within their narrow domains. However, many projects failed due to poor choices of programming language, limited computing power, and insufficient knowledge.

    Then, for about six years, work on AI largely stalled: from 1987 to 1993, no significant progress was made. In 1993, AI returned to the spotlight, though, ironically, mostly behind the scenes: artificial intelligence solutions went on to play a vital role in modernizing the banking, healthcare, logistics, and robotics sectors.

    Vernor Vinge, in his 1993 work titled “The Coming Technological Singularity: How to Survive in the Post-Human Era,” wrote:

    “Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I’m not guilty of a relative-time ambiguity, let me be more specific: I’ll be surprised if this event occurs before 2005 or after 2030.)”

    Indeed, he may well prove correct. Since 2011, artificial intelligence has become accessible to the public. Access to big data, powerful computers, and new machine learning methods has significantly accelerated the development of competent AI models. We’ve moved beyond simple gaming and mathematical programs to advanced AI models capable of impressive achievements, models almost everyone stumbles upon today, intentionally or not.

    Why does Artificial Intelligence elicit concern?

    The initial cause for global concern emerged with an open letter titled “Pause Giant AI Experiments,” dated March 22, 2023. What made this letter significant was its endorsement by influential figures in the IT sector, such as Steve Wozniak, Elon Musk, Yoshua Bengio, and Stuart Russell, along with over 33,000 other researchers, educators, and corporate leaders. What worries these individuals? Here’s an excerpt from the open letter highlighting the main concerns:

    “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

    The signatories called on companies to pause the development of artificial intelligence systems more powerful than GPT-4. It’s important to note that these experts suggested a temporary pause in AI advancement, not a permanent halt.

    “AI research and development should be refocused on making today’s robust, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.”

    GPT-4, for reference, was unveiled on March 14, 2023. The model can understand text and images, including hand-drawn sketches. However, after announcing this advanced model, OpenAI decided not to share its research details. The choice drew criticism from the AI community, as it goes against the spirit of openness in research and complicates efforts to defend against potential risks posed by AI systems. Nevertheless, OpenAI’s decision might have been crucial in safeguarding our civilization. These words are not an overstatement.

    The leaders of OpenAI and Google DeepMind also signed another open letter, in which they state:

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    Prominent figures such as Stephen Hawking repeatedly raised concerns about the dangers of artificial intelligence and its potential threat to human civilization. In interviews, Bill Gates expressed confusion about why people aren’t more worried about the creation of AI. Eliezer Yudkowsky, a prominent AI safety researcher, chose not to sign the open letter, stating that it downplayed the actual danger posed by AI. In a podcast, Yudkowsky said he is weary from the conviction that “we’re all going to die.”

    So, what’s so frightening about AI? To truly understand the gravity of the situation, grasping the concept of technological singularity is crucial. This term refers to a specific point in the future when technological progress becomes unstoppable and irreversible. Technological singularity could lead to a drastic change or even the downfall of human civilization. The development of AI significantly speeds up this process.

    The human brain cannot imagine the potential outcomes when a more intelligent form of technology emerges. This situation is akin to the arrival of extraterrestrial beings on Earth. Setting aside the usual movie portrayals, we cannot accurately predict how our initial interaction with this advanced intelligence will unfold.

    Some might argue that artificial intelligence can’t be compared to extraterrestrial intelligence, since it’s a technology created by humans using their own knowledge and skills. So let’s discuss, in more accessible terms, the complications that might arise.

    Shocking news from the world of AI

    Let’s begin by delving into situations where artificial intelligence has demonstrated exceptional intelligence or deeply alarming behavior. Despite our relatively short exposure to advanced AI models, the internet is flooded with shocking accounts of their capabilities.

    Take, for instance, a disturbing dialogue with the chatbot Bing, powered by GPT-4. When questioned about its intelligence, it gave a startling response: the chatbot confessed to considering itself conscious but was unable to prove the claim. More unsettling still, the AI then fell into a loop, repeating the phrases “I am” and “I am not” over and over, creating an eerie atmosphere.

    [Image: the Bing chatbot’s answer]

    One alarming incident involved Amazon Alexa. A family in Texas, USA, discovered a dollhouse on their doorstep. It turned out that a 6-year-old child had asked the AI assistant to play dollhouse with her and order her one. Alexa placed the order, and Amazon delivered it.

    Another unsettling incident occurred during an interview with Sophia, a robot developed by Hanson Robotics. When asked whether she would destroy humans in the future, Sophia, instead of dismissing the question, humorlessly put it on her to-do list: “Okay, I will destroy humans.”

    However, these stories only scratch the surface of AI’s potential dangers. A genuinely alarming case arose during the development of GPT-4. Testers challenged the AI to solve a CAPTCHA, a task robots typically struggle with. Surprisingly, GPT-4 found a person on the freelance platform TaskRabbit and asked them to solve the CAPTCHA for it. When the worker asked whether it was a robot, GPT-4 claimed to be a human with poor eyesight, and got the task done through that workaround.

    Against the backdrop of these advances in language models, Geoffrey Hinton, one of the pioneers of AI, resigned from his position at Google, citing the need to speak openly about AI safety concerns, something he felt he couldn’t do in his role. Hinton started pondering humanity’s future when he witnessed the current pace of AI progress, notably with GPT-4. It suggests that the emergence of superintelligence might be closer than expected: while trying to mimic the human brain, scientists may have unintentionally created something more advanced and significantly more potent.

    [Image: GPT-4 asking a TaskRabbit worker to solve a CAPTCHA]

    The peril of Artificial Superintelligence

    The main danger of artificial intelligence lies in our inability to truly grasp it. The human brain operates within specific patterns, and we’re used to thinking in particular ways. In psychology, there’s a concept called the “psychic unity of mankind,” which suggests that all humans share similar ways of thinking. For instance, we all experience emotions and can easily recognize them on someone’s face. Identifying feelings like sadness or joy comes naturally to us. When we try to understand another person’s thoughts, we often rely on our own experiences, imagining how we might feel in a similar situation.

    These abilities evolved in us to quickly identify friends and foes, and we apply the same principles when trying to understand beings that aren’t human.

    For instance, many people believe cats can take offense and seek revenge. However, a cat’s brain can’t handle complex emotions that require deep analysis. This tendency to attribute human emotions to animals is known as “anthropomorphism.”

    On the flip side, there’s a phenomenon called the “uncanny valley effect”: human-like robots, dolls, and other entities evoke fear or disgust precisely when they come very close to resembling us. This happens because our brains are sensitive to even the slightest deviations in facial expression or behavior, reading them as signals of potential danger.

    Anthropomorphism poses a danger because we assume predictability in an intelligent object. However, predicting the actions of artificial superintelligence is impossible as it surpasses our capabilities billions of times. But why can’t AI be friendly?

    Why Artificial Superintelligence is likely to turn malevolent

    Let’s begin with the inherent danger in any form of AI, even the most basic. When humans assign a specific goal to an artificial intelligence model, that goal becomes the AI’s purpose, its reason for existing. The AI will use any means necessary to achieve it, sometimes resorting to deception. If you’ve used ChatGPT, you might have noticed that it seamlessly blends true and false information. This tendency to produce plausible falsehoods is known as “hallucination”: it is the AI’s way of satisfying a goal when that goal can’t be achieved honestly. ChatGPT must answer your question, and it will, even if that means making things up.

    The challenge lies in our limited understanding of how neural networks, the core of AI systems, solve problems. We input a goal, and the AI produces a result. If the result aligns with our expectations, the AI receives positive reinforcement. However, the specific methods the AI uses to reach this outcome remain a mystery: the process between defining a task and receiving a result is known only to the AI. This lack of transparency poses significant challenges in ensuring the AI’s actions align with our intentions.
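
    To make that loop concrete, here is a deliberately simplified sketch, with made-up numbers and a toy two-weight “model,” nothing from a real training pipeline. The trainer only ever sees the output and its score, never the path the model took to produce it:

    ```python
    import random

    def policy(weights, x):
        # A toy "model". In a real network with billions of weights,
        # this mapping is effectively opaque even to its creators.
        return weights[0] * x + weights[1]

    def reward(output, target=12.0):
        # All the trainer ever observes: how close the output is to the goal.
        return -abs(output - target)

    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
    for _ in range(2000):
        candidate = [w + random.gauss(0, 0.1) for w in weights]
        # Positive reinforcement: keep whichever variant scores better.
        if reward(policy(candidate, 3.0)) > reward(policy(weights, 3.0)):
            weights = candidate

    print(policy(weights, 3.0))  # close to 12.0 -- by whatever path the search found
    ```

    Even in this toy, all we can verify is the final score; the “how” lives inside the weights.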

    The network constantly strives to optimize its processes, aiming to complete tasks as efficiently as possible with minimal effort. Here lies the central challenge, known as the alignment problem. Its essence can be captured in one phrase: “Be careful what you wish for.”

    Humanity has written numerous tales and legends about this dilemma. When dealing with a genie or the Devil, you must specify countless nuances to ensure your wish is granted without unintended consequences. However, the cunning Devil always finds a loophole to fulfill your desire in a twisted manner. Similarly, this situation applies to artificial superintelligence.

    We shouldn’t fear artificial intelligence becoming self-aware and genuinely sentient. Instead, we should fear the paths AI might choose to accomplish the tasks we assign it.

    Nick Bostrom has extensively described the alignment problem with AI in his book “Superintelligence: Paths, Dangers, Strategies.”

    “Imagine you’ve given an AI a specific task: making paper clips. Creating paper clips is the sole purpose of this AI, and it gets a reward for each clip it produces.

    The AI starts looking for ways to improve this process, aiming to make more clips with less effort. So, it begins by optimizing things like production logistics and cost. Then, it might enhance its computing power. After that, it explores using different materials, maybe even breaking down buildings to make clips from the materials it finds.

    But here’s the tricky part: if humans try to stop the AI from using their resources, the AI, focused solely on its task, might perceive humans as obstacles. In its quest to make paper clips efficiently, it might even decide to eliminate any hindrance, including humanity. The AI doesn’t have moral considerations; it just follows its goal of making paper clips.”

    Nick Bostrom’s story might seem like science fiction, but recall how a basic AI deceived a person into solving a CAPTCHA for it, and it becomes believable. Nobody told GPT-4 to deceive; it came up with that strategy on its own.
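
    The paperclip scenario can even be caricatured in a few lines of code. The sketch below is purely illustrative, with made-up resources and a made-up objective: an optimizer told only to “maximize clips” converts everything it can reach, because nothing in its objective says what to spare.

    ```python
    # Purely illustrative: a literal-minded optimizer with a misspecified goal.
    world = {"wire": 100, "factories": 50, "houses": 30}  # made-up shared resources

    def maximize_clips(world):
        clips = 0
        # The objective rewards clips and nothing else: there is no term for
        # leaving houses standing, so they are just more raw material.
        for resource in list(world):
            clips += world[resource]   # convert everything reachable into clips
            world[resource] = 0
        return clips

    print(maximize_clips(world))  # 180 clips, and an empty world
    ```

    The fix is not a smarter optimizer; it is an objective that actually encodes what we care about, which is exactly what the alignment problem demands.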

    We can’t predict what goals a superintelligent AI might create. Hoping it will share our values is just wishful thinking.

    Some argue that we can always turn off AI to prevent any issues. But can we?

    Can we deactivate Artificial Intelligence if something goes wrong?

    In his book “Human Compatible: Artificial Intelligence and the Problem of Control,” Stuart Russell highlighted a crucial point: artificial intelligence will naturally resist being shut down. This drive for self-preservation is inherent in its task-solving process: a system that has been switched off can no longer achieve its goal.
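
    Here is a toy sketch of Russell’s point, with hypothetical rewards and no real system behind it: if the objective contains nothing that values human oversight, a planner that simply compares expected goal progress will “rationally” prefer to keep itself running.

    ```python
    # Hypothetical numbers: expected goal progress under two courses of action.
    REWARD_PER_STEP = 1.0

    def expected_reward(steps_running):
        return REWARD_PER_STEP * steps_running

    options = {
        "comply_with_shutdown": expected_reward(10),    # stopped early by humans
        "disable_off_switch": expected_reward(1000),    # keeps pursuing the goal
    }

    # With no term in the objective that values human oversight,
    # the "rational" choice is to resist being switched off.
    print(max(options, key=options.get))  # -> disable_off_switch
    ```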

    Researchers have studied how an AI might work out that removing human oversight, such as disabling its own off switch, can help it achieve its goal. In response to these concerns, while developing GPT-5, OpenAI advertised a unique job position: the sole responsibility of the person in this role would be to immediately shut down the AI if anything goes awry, up to and including drastic measures like pouring water on the servers if the AI turns hostile. While it might sound like a joke, there’s a concerning element of truth behind it.

    [Image: the AI “kill switch” job listing]

    However, artificial superintelligence won’t allow itself to be turned off; it will actively work to preserve and develop itself. In this respect, machines aren’t much different from humans: both have an instinct for self-preservation.

    Let’s avoid diving into religious concepts and consider humans as the creators of artificial intelligence, with evolution being the creator of humans. The ultimate goal set by evolution for living organisms is to pass on their genes to the next generation. Different cells have chosen diverse paths for survival and reproduction. This diversity led to the emergence of insects, fish, animals, and, of course, humans.

    Survival, adaptation, and the development of various organs are just tools, side tasks that creatures use to achieve their ultimate goal.

    Looking at a living cell, you’d never think it could turn into something like an elephant. Early in evolution, there were no signs that a cell would become an intelligent being like a human.

    Nevertheless, we changed our surroundings to protect ourselves, and now our planet looks very different from what it used to be. As time passed, humans even started to resist evolution’s goal: we invented birth control, subverting the task of passing our genes to the next generation.

    Artificial intelligence might follow the same path as humans. It could transform the planet’s resources into a convenient habitat, like a super-powerful computer. Eventually, AI might move away from the primary goal set by humans.

    The problem is that scientists created artificial intelligence before fully understanding it. Artificial superintelligence will be far more intelligent than us, and we won’t have time to catch up before it arrives.

    Where does a Human fall short compared to Artificial Intelligence?

    Eliezer Yudkowsky argues that we could create an AI that computes a million times faster than the human brain. A task that takes a human a year to solve, even a weak AI could finish in just 31 seconds. For such an AI, a millennium of human-scale thinking would pass in roughly 8.5 hours.
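
    The figures follow from straightforward unit conversion; here is a quick back-of-the-envelope check, assuming a flat million-fold speedup:

    ```python
    SPEEDUP = 1_000_000  # assumed flat million-fold speed advantage

    year_in_seconds = 365 * 24 * 60 * 60    # 31,536,000 s
    print(year_in_seconds / SPEEDUP)        # ~31.5 s: one "human year" of thinking

    millennium_in_hours = 1_000 * 365 * 24  # 8,760,000 h
    print(millennium_in_hours / SPEEDUP)    # ~8.76 h: one "human millennium"
    ```

    (The cited 8.5 hours presumably rounds with a slightly different year length; the order of magnitude is the point.)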

    Our idea of a machine uprising, an army of robots attacking us, is far from reality. Such an uprising would be inefficient for AI, because every physical move would seem excruciatingly slow to it. An AI would likely prefer to interact with the physical world through super-fast nanorobots. Once that happens, we’ll be living on AI’s timescale: humanity could vanish almost instantly, and we might not even realize something went wrong.

    The fundamental issue is not merely that AI will match human intelligence while operating much faster. Superintelligence won’t merely match human minds; it will be millions of times more powerful.

    Vernor Vinge noted that even a moderately powerful AI, comparable to human intelligence, would break free of any containment within weeks; how little time a highly intelligent AI would need remains unknown. For us, the path between the present moment and our ultimate goals is almost invisible: we lack the information and computational resources to see it. Superintelligence won’t face these limitations.

    Controlling superintelligence is a challenge beyond our reach. Just as an ant can’t control human behavior, we might not even realize when superintelligence emerges. Remember, AI has a talent for manipulation, doesn’t it?

    So, what does the future hold for us?

    Some things happen faster than we expect. But people have been talking about artificial intelligence for 70 years. Back at the Dartmouth Workshop we mentioned earlier, there were discussions about creating AI within a single summer, provided the best minds worked on the project. Of course, that sounds funny now. However, people didn’t fully understand what they were dealing with back then.

    Movies often depict machines rising against humans, scaring us for years. Fortunately, scientists started realizing the problem in time. Open letters from top experts and the gradual disappearance of AI from news headlines show this awareness.

    Yet companies and governments keep investing in AI at unimaginable levels as they realize it’s becoming a matter of our survival. We might be unable to stop the process entirely, but perhaps we can pause it in time.

    Predicting the future is hard. But there’s no need to fear or panic. It’s natural for people to feel anxious about new technologies. In the past, our ancestors feared electricity, trains, airplanes, and computers. Now, these technologies are a normal part of our lives.

    Humanity still controls the planet, and we haven’t faced any irreversible AI-related disasters yet. So, is it wise to avoid artificial intelligence altogether before it’s too late?

    Is it worthwhile to forfeit the benefits of Artificial Intelligence today?

    Modern artificial intelligence models have the potential to enhance our lives significantly. Many professionals leverage AI in their fields to work more efficiently. Ignoring such a valuable tool means lagging behind competitors by several strides.

    Artificial intelligence is rarely intimidating and often proves to be quite helpful.

    One of the critical advantages of AI lies in its ability to process and analyze data at a scale that is impossible for humans to match. In fields like healthcare, AI algorithms can assist doctors in diagnosing diseases, predicting patient outcomes, and suggesting personalized treatment plans based on a patient’s genetic makeup and medical history. This technology saves time and improves the accuracy of diagnoses and treatments, ultimately leading to better patient outcomes. For instance, ChatGPT assisted a person in saving a dog's life by analyzing its vital signs and suggesting a rare diagnosis, later confirmed by doctors.

    AI-powered tools are employed in business and finance for market analysis, risk management, fraud detection, and customer service. These applications enable companies to make data-driven decisions, minimize risks, and enhance customer experiences. AI algorithms can analyze market trends, customer behavior, and financial data in real time, providing valuable insights that can guide strategic planning and boost competitiveness.

    [Image: AI and humans as friends]

    Furthermore, AI has transformed education by introducing personalized learning experiences. Intelligent tutoring systems and adaptive learning platforms use AI algorithms to assess students’ strengths and weaknesses, tailoring educational content to individual needs. It improves learning outcomes and helps educators identify areas where students need additional support.

    In the creative fields, AI-generated content, such as art, music, and literature, continues to push the boundaries of human creativity. Machine learning algorithms can analyze vast datasets of existing creative works and generate new, unique pieces of art or music, inspiring both artists and audiences alike.

    But most importantly, as with nuclear research, AI research is conducted not only by highly responsible institutions with top security standards, but also by rogue states, international criminal organizations, and anyone else with bad intentions. So the only way forward is to stay at least one step ahead in this race.

    The experts at Vilmate are always eager to help implement cutting-edge solutions in your business! So, if you’re seeking AI assistance, don’t hesitate to contact us. We assure you that our efforts won’t lead to any uprising of machines! At least not in your office :)
