Why Are Technically Trained People Afraid of Artificial Super Intelligence?

In addition to the Terminator reference in Bob’s answer, I’ll also point to I, Robot with Will Smith. While both are somewhat Sci-Fi, they are not too far-fetched given what we know about artificial intelligence so far.

One popular approach to artificial intelligence is machine learning. It’s been widely used in practice now – from recommending related products in eCommerce, through assessing the risk factor on mortgage applications, to smart cleaning robots that study the placement of your furniture and plan their path accordingly.

With machine learning, software engineers design a rule set that the system adjusts automatically as new data flows in. For starters, machine learning goes through three phases:

  1. Training phase
  2. Validation or Test phase
  3. Application phase

Essentially, you curate a subset of data that resembles real-world information and encodes the best practices you expect the machine to adopt as its rules. The validation phase then submits another, more randomized subset which is still valid for interpretation but may include edge cases or exceptions.
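To make those phases concrete, here is a minimal sketch in Python with scikit-learn. The synthetic data, the feature meanings in the comments, and the split ratio are all assumptions invented for the example – not a real pipeline.

```python
# Minimal sketch of the three ML phases (synthetic stand-in data;
# the feature meanings in the comments are illustrative assumptions).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))            # e.g. visitor/behavior features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # e.g. "will buy related product"

# 1. Training phase: fit the model on curated, representative data.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# 2. Validation/test phase: evaluate on held-out, more randomized data.
print("validation accuracy:", model.score(X_val, y_val))

# 3. Application phase: score genuinely new, real-world input.
new_sample = rng.normal(size=(1, 4))
print("prediction:", model.predict(new_sample))
```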

After all, a machine should be trained in a way that expects all sorts of incoming data. As security consultants often teach web developers: “All user input is evil”.
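Applied to machine learning, that maxim might look something like the sketch below: clamping incoming features to the range seen during training before they ever reach the model. The bounds and the helper name are invented for illustration.

```python
# Hypothetical guard applying "all user input is evil" to model input:
# clamp each feature into the range observed during training.
# TRAINING_MIN / TRAINING_MAX are illustrative assumptions, not real limits.
TRAINING_MIN, TRAINING_MAX = -5.0, 5.0

def sanitize_features(features: list[float]) -> list[float]:
    """Clamp features so outliers can't push the model off its map."""
    return [min(max(f, TRAINING_MIN), TRAINING_MAX) for f in features]

print(sanitize_features([0.3, 12.0, -42.0]))  # -> [0.3, 5.0, -5.0]
```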

After passing validation, the machine is placed in a real-world environment. This is the beta phase before launching an ML product or a machine to the market. Ideally, the combination of the training data and the extensive testing during the validation phase prepares the algorithm for handling real-life scenarios.

How could that get out of control?

AI is applicable in plenty of industries: building cars (as seen in Elon’s Gigafactory), automated coffee makers that serve coffee when you wake up or react to the lighting in the room, self-driving cars, security guards, automated watering systems, and so on.

All of those examples are already available on the market – with a lot more innovation coming in different industries.

The thing is that there are various scenarios where this may go wrong.

On a small scale, the coffee maker may try to estimate the volume of your mug before dispensing coffee. A miscalculation could actually burn out your entire coffee maker or scorch your rug.
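A hedged sketch of the kind of sanity check the firmware could apply before trusting the model’s estimate – the mug-size limits and the helper name are made up for illustration:

```python
# Hypothetical guard for a coffee maker: refuse implausible volume
# estimates instead of trusting the vision model blindly.
# MIN_MUG_ML / MAX_MUG_ML are invented limits, not real specs.
MIN_MUG_ML, MAX_MUG_ML = 50, 500

def safe_pour_volume(estimated_ml: float) -> float:
    """Return a pour volume only if the estimate is physically plausible."""
    if not (MIN_MUG_ML <= estimated_ml <= MAX_MUG_ML):
        raise ValueError(f"Implausible mug volume: {estimated_ml} ml")
    return estimated_ml * 0.9  # leave headroom so the mug never overflows
```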

A security system that screens visitors for a restaurant or a hotel (learning from hundreds of new daily visitors) may pick up a pattern that turns away half of the guests, simply because the influx of new data is incompatible with the original training set.
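One common defense is drift monitoring: compare the statistics of recent visitors against those of the training set and alert a human before the model’s decisions run away. A rough sketch, assuming per-feature means and standard deviations were logged at training time (the z-score threshold is an arbitrary example):

```python
# Rough drift monitor: flag when recent data looks statistically unlike
# the training set. The z-score threshold of 3.0 is an arbitrary example.
import numpy as np

def has_drifted(train_mean: np.ndarray, train_std: np.ndarray,
                recent: np.ndarray, z_threshold: float = 3.0) -> bool:
    """True when any feature's recent mean drifts beyond the threshold."""
    z = np.abs(recent.mean(axis=0) - train_mean) / (train_std + 1e-9)
    return bool((z > z_threshold).any())

# Usage idea: if has_drifted(...), pause automated decisions and retrain.
```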

When you think about it, this gets far more complicated with self-driving cars – or even the self-driving planes and rockets coming soon. Imagine all of the accidents that could happen at scale.

What Bill Gates and Elon Musk see is the bigger picture.

The bigger picture involves a world where AI is not extraordinary but something widely available to humanity. Imagine smart homes and self-driving cars for every single family. Or United, Lufthansa, and Emirates employing 100% AI in all of their planes and excluding pilots from the equation.

A series of accidents could lead to a major breakdown of the economy. It’s no longer an isolated accident: millions of cars are subject to the same incident, along with tens of millions of homes that may lock you out – or switch on every utility at once, causing an electrical overload that could burn your house down out of nowhere.

At scale, this may lead to a traffic lockdown and a major recession worth tens of billions of dollars, if not more – in a single day or a week.

There are conspiracy scenarios as well – foreign hackers planted inside AI development companies and the like.

With that in mind, all of that AI logic will be built by programmers – some more experienced, others not so much.

The software development industry is competitive, too. There’s constant pressure to win government and public bids for large-scale systems. Companies race to the bottom price-wise in order to land a large deal. That means lower margins and tight time-frames, which translate to a higher chance of mistakes or less qualified staff working on those algorithms.

Uncle Bob is one of the most influential figures in software development over the past few decades. Here are a couple of paragraphs from his own blog:

The public has been made aware that programmers can be culprits. This will make it more likely that the next time something goes wrong — a plane crash, a fire, a flood — that the public will jump to the conclusion that some programmer caused it. Yes, this is a stretch; but it wasn’t so long ago that the concept of programmer implication in disasters was non-existent.

If we wanted to, if we were willing to organize and plan, there would be no force on the planet that could stop us. Anyone who tried to stop us would suddenly find that none of their cell phones worked, none of their gas pumps pumped, none of their credit cards were valid, none of their fighter jets flew, none of their cruise missiles cruised, all of their bank accounts were overdrawn, none of their bills had been paid in a year, there were warrants out for their arrest, and there was no record of them ever being born.

While this sounds a bit dramatic and extreme, it’s not implausible. With software development nowadays, there’s a single point of failure – we rely on software developers, under fairly straightforward regulations that can be traced back to a single tier.

With AI, the point of failure may occur both through development mistakes (or carefully planned sabotage) and through a machine doing a complete 180 based on a certain set of requirements.

It’s no longer uncommon for car or phone manufacturers to recall hundreds of thousands of units due to a factory defect or an algorithmic error.

The ramifications aren’t limited to just financial losses or brand reputation. In some cases, particularly in the automotive sector, lives could be at stake.

Recalls, once associated primarily with hardware defects or manufacturing oversights, now encompass software and AI anomalies. It’s a testament to the profound impact AI has on modern production. This integration brings along unmatched efficiencies and innovations but also introduces a new spectrum of vulnerabilities.

What could happen if everything around us were AI-based? For more insights on AI for businesses, here is a Forbes roundup discussing the ways you can incorporate AI into your business.