WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial intelligence (AI) is evolving rapidly, from Siri to self-driving cars. While science fiction often depicts AI as humanoid robots, the term covers everything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Today’s artificial intelligence is known as narrow AI (or weak AI) because it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). The long-term goal of many researchers, however, is to create general AI (AGI or strong AI). While narrow AI may outperform humans at a single task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

WHY ARE YOU INTERESTED IN AI SAFETY RESEARCH?

In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control. Whereas a laptop that crashes or gets hacked may be little more than a nuisance, it becomes far more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system, or your power grid. Another near-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long run, a key question is what will happen if the quest for strong AI succeeds and AI systems surpass humans at all cognitive tasks. As I.J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task. Such a system could in principle undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help humanity eradicate war, disease, and poverty, so the creation of strong AI could be the most significant event in human history. Some scientists worry, however, that it might also be the last, unless we learn to align the AI’s goals with our own before it becomes superintelligent.

Some question whether strong AI will ever be achieved, while others insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI, we recognize both of these possibilities, as well as the potential for an AI system to cause great harm, whether intentionally or unintentionally. We believe that research today will help us prepare for and prevent such potentially negative consequences in the future, allowing us to enjoy the benefits of AI while avoiding the pitfalls.

HOW CAN ARTIFICIAL INTELLIGENCE BE DANGEROUS?

Most researchers agree that a superintelligent AI is unlikely to experience human emotions such as love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts regard two scenarios as most likely:

The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to “switch off,” so humans could plausibly lose control of such a situation. This risk exists even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.

The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our biosphere as a side effect, and view human attempts to stop it as a threat to be met.

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is never to place humanity in the position of those ants.
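To make the misaligned-objective point concrete, here is a minimal Python sketch. It is purely illustrative: the candidate plans, the cost weights, and the two scoring functions are invented for this example rather than taken from any real system. The point is simply that an optimizer scoring plans only by the stated objective (“as fast as possible”) picks a plan no passenger would want, while one whose objective also encodes the unstated preferences does not.

    # Toy illustration of objective misspecification (all numbers invented).
    plans = {
        "drive normally":           {"minutes": 45, "violations": 0,  "distress": 0},
        "run the red lights":       {"minutes": 20, "violations": 9,  "distress": 8},
        "shortcut across a runway": {"minutes": 15, "violations": 20, "distress": 10},
    }

    def literal_objective(plan):
        # Exactly what was asked for: "as quickly as possible".
        return plan["minutes"]

    def intended_objective(plan):
        # What the passenger actually meant: fast, but also legal and comfortable.
        return plan["minutes"] + 10 * plan["violations"] + 5 * plan["distress"]

    print(min(plans, key=lambda name: literal_objective(plans[name])))   # shortcut across a runway
    print(min(plans, key=lambda name: intended_objective(plans[name])))  # drive normally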


WHAT IS THE REASON FOR THE RECENT INTEREST IN AI SAFETY?

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology, joined by many leading AI researchers, have recently expressed concern in the media and via open letters about the risks posed by AI. Why is the subject suddenly in the headlines?

The idea that the quest for strong AI would ultimately succeed was long regarded as science fiction, decades or even centuries away. Thanks to recent breakthroughs, however, many AI milestones that experts considered decades off have already been reached, leading many scientists to take seriously the possibility of superintelligence within our lifetime. While some experts still guess that human-level AI is centuries or more away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would arrive before 2060. Since the required safety research might itself take decades, it is prudent to start now.

Because AI has the potential to become more intelligent than any human, we have no sure way of predicting how it will behave. We can’t use past technological developments as much of a basis, because we’ve never created anything that could, wittingly or unwittingly, outsmart us. The best indication of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest, or biggest, but because we’re the smartest. If we’re no longer the smartest, will we remain in control?

FLI’s position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI, FLI believes the best way to win that race is not to impede the technology but to support AI safety research.

THE MOST COMMON MYTHS ABOUT ADVANCED ARTIFICIAL INTELLIGENCE

A fascinating conversation is taking place about the future of artificial intelligence and what it will or should mean for humanity. There are captivating controversies about AI’s future impact on the job market, whether human-level AI will be developed, whether this will lead to an intelligence explosion, and whether this is something to welcome or fear. But there are also many tedious pseudo-controversies caused by people misunderstanding and talking past each other. To help us focus on the interesting controversies and open questions rather than the misunderstandings, let’s clear up some of the most common myths.

MYTHS ABOUT TIMELINES

The first myth concerns the timeline: how long will it take until machines greatly surpass human-level intelligence? A common misconception is that we know the answer with great certainty.

One common misconception is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are the fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester, and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”


A popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but given the dismal track record of such techno-skeptic predictions, we certainly can’t say with confidence that the probability is zero this century. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, declared in 1933, less than 24 hours before Szilard’s invention of the nuclear chain reaction, that nuclear energy was “moonshine,” and in 1956 Astronomer Royal Richard Woolley called interplanetary travel “utter bilge.” The most extreme form of this myth holds that superhuman AI will never arrive because it is physically impossible. However, physicists know that a brain consists of quarks and electrons arranged in such a way that they act as a powerful computer, and that no law of physics prevents us from building even more intelligent quark blobs.

A number of surveys have asked AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys reach the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in a poll of AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was the year 2045, although some researchers guessed hundreds of years or more.
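A quick sketch shows why the median, rather than the mean, is the natural summary for such a survey. The numbers below are invented for illustration and are not the actual Puerto Rico responses: a handful of “hundreds of years” answers drags the mean far out while barely moving the median.

    import statistics

    # Hypothetical survey answers: years until human-level AI at 50% probability.
    # These values are made up for illustration, not the real conference data.
    answers = [15, 20, 25, 30, 30, 35, 40, 45, 60, 300, 500]

    print("mean:  ", round(statistics.mean(answers)))   # 100 -- pulled up by a few huge guesses
    print("median:", statistics.median(answers))        # 35  -- robust to those outliers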

Another common misconception is that the people who worry about AI think it is only a few years away. In fact, most people on record worrying about superhuman AI guess it is still at least decades away. But they argue that as long as we can’t be certain it won’t happen this century, it is wise to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it is prudent to start researching them now rather than the night before some Red Bull-fueled coders decide to switch one on.

MYTHS ABOUT CONTROVERSY

Another common misconception is that the only people harboring concerns about AI and advocating AI safety research are luddites who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don’t need to be convinced that the risks are high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the house burning down.
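The insurance analogy is just expected-value arithmetic. The sketch below uses made-up figures (the fire probability, the loss, and the premium are assumptions for illustration, not numbers from the text) to show why a small probability of a very large loss can justify a modest up-front investment.

    # Expected-value sketch behind the home-insurance analogy (figures invented).
    p_fire = 0.003          # assumed annual probability the house burns down
    loss_if_fire = 300_000  # assumed cost of losing the house
    premium = 600           # assumed annual insurance premium

    expected_loss = p_fire * loss_if_fire
    print(f"expected annual loss uninsured: {expected_loss:.0f}")  # 900
    print(f"annual premium:                 {premium}")            # 600
    # The risk is small, but the expected loss already exceeds the premium,
    # so a modest, steady investment in mitigation is reasonable.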

It may be that the media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom tend to generate more clicks than nuanced, balanced pieces. As a result, two people who know each other’s positions only through media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who only read about Bill Gates’s position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone who knows nothing about Andrew Ng’s position except his quote about overpopulation on Mars may mistakenly think he doesn’t care about AI safety, whereas in fact he does. The crux is simply that because Ng’s timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

MYTHS ABOUT SUPERHUMAN AI’S RISKS

Many AI researchers roll their eyes when they see headlines like “Stephen Hawking warns that the rise of robots may be disastrous for mankind.” Many have lost count of how many similar articles they’ve read. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they have become conscious and/or malevolent. On a lighter note, such articles are actually rather impressive, because they neatly summarize the scenario that AI researchers don’t worry about. That scenario combines three separate misconceptions: concern about consciousness, evil, and robots.


If you drive down the road, you have a subjective experience of colors, sounds, and so on. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although the mystery of consciousness is interesting in its own right, it has no bearing on AI risk. If you are struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning malevolent is another red herring. The real concern is competence, not malice. A superintelligent AI is by definition extraordinarily good at attaining its goals, whatever they may be, so we must ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re smarter than they are, so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to keep humanity from ending up in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If a heat-seeking missile were chasing you, you probably wouldn’t exclaim, “I’m not worried, because machines can’t have goals!”
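Goal-directed behavior in this narrow sense is easy to write down. The sketch below is a hypothetical toy, not a model of any real missile: a pursuer simply moves to shrink its distance to a drifting target at every step, and “it is trying to hit the target” is the most economical description of what the code does.

    import math

    # Minimal goal-directed pursuer: its only "goal", in the narrow behavioral
    # sense, is to reduce its distance to the target until it hits it.
    def pursue(pursuer, target, speed=2.0, steps=50, hit_radius=1.0):
        px, py = pursuer
        tx, ty = target
        for step in range(steps):
            dx, dy = tx - px, ty - py
            dist = math.hypot(dx, dy)
            if dist <= hit_radius:
                return step                  # goal achieved
            px += speed * dx / dist          # move straight toward the target
            py += speed * dy / dist
            tx += 1.0                        # the target keeps drifting away
        return None                          # never caught it within the step limit

    print(pursue(pursuer=(0.0, 0.0), target=(20.0, 10.0)))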

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red glowing eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. Such misaligned superhuman intelligence needs no robotic body to cause us trouble; all it takes is an internet connection to outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or persuade many humans to unwittingly do its bidding.

Related to the robot myth is the misconception that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as the smartest beings on the planet, we may also cede control.

CONTROVERSIES OF INTEREST

If we don’t waste time on the myths above, we can focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines, or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please join the conversation!
