Artificial Intelligence, Curse or Blessing?

ALTEN AI

Recently I attended an event organised by the Dutch Police. A major topic was Artificial Intelligence (AI). Two speakers, Deborah Nas and Yori Kamphuis, were invited to talk about AI and the impact it may have on our current lives and in the future. Their talks were an eye-opener that I would like to share with you.

But first, what is AI? When we use the term Artificial Intelligence, we often refer to a group of techniques that strive to let computers make decisions automatically. Simply put, we recognise three levels of AI (a small code sketch follows the list):

  • Artificial Intelligence: Using an algorithm (a predefined set of steps) to perform specific tasks. AI applies previously gained knowledge.
  • Machine Learning: Automatic pattern recognition (learning) in data, i.e. the acquisition of skills or knowledge. The learned algorithm is the end product of the learning process and can then be used by AI.
  • Deep Learning: A subset of machine learning. If a machine learning model returns an inaccurate prediction, the programmer has to fix that problem explicitly; a deep learning model adjusts itself.
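
To make the distinction concrete, here is a minimal, hypothetical sketch in Python: a hand-written rule stands in for a predefined algorithm, while a scikit-learn model learns its decision rule from example data. The threshold, feature and data are invented for illustration only.

    from sklearn.tree import DecisionTreeClassifier

    # "AI" in the narrow sense: a predefined algorithm, written by a human.
    def rule_based_decision(income: float) -> bool:
        return income > 30_000  # hypothetical hand-picked threshold

    # Machine learning: the decision rule is learned from (toy) data.
    X = [[20_000], [25_000], [40_000], [50_000]]  # feature: income
    y = [False, False, True, True]                # label: approved?

    model = DecisionTreeClassifier().fit(X, y)

    print(rule_based_decision(35_000))   # decision from the predefined rule
    print(model.predict([[35_000]])[0])  # decision from the learned model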

AI is rapidly evolving and new uses pop up every day. We all know examples like Amazon recommending what our next purchase could be. More recently we’ve heard stories of facial recognition in China that could be used in a social credit system. But AI also supports advances in cancer treatment, allowing for very detailed personalised treatment plans that take factors like gender, ethnicity and culture into consideration. Or how about fake advertisements of celebrities recommending Bitcoin, or Google Home speakers ordering a pizza?

AI is becoming a bigger part of our everyday lives and sometimes we no longer recognise what is real and what is generated. Have a look at the site www.thispersondoesnotexist.com. It generates portraits of people who don’t exist. You will probably find it harder to believe that these people don’t exist than that it’s possible to generate such pictures. Similar things happen with audio and video. From a single photo of someone, it is possible to render their face from other angles. Or how about projecting your own facial features in real time onto a video of someone else so that it mimics you (a deepfake)? Another example is the reconstruction of someone’s voice based on their facial features. Think of the possibilities.

Technology is advancing rapidly in this area. You might already have a hard time separating fact from fiction. Generating images and manipulating audio and video doesn’t even require huge amounts of processing power; it can be done on a normal PC and is therefore within everyone’s reach. And we’re only at the beginning of the AI journey. Imagine what lies ahead.

As we are still in the childhood stages of AI, we learn by trial and error. Quite often, a model doesn’t behave as we expect it to. Take for example the Amazon recruiting engine that had a bias against women. Amazon created an algorithm that would select the most suitable applicants for software development jobs, but it turned out that the algorithm had a strong bias towards male candidates. This preference was caused by the data used to train and develop the model: Amazon had used the last ten years of hiring data, and as IT jobs mostly attract male applicants, the preference for male candidates had crept into the algorithm. So it was not so much a problem of the method, but of the data used to train the model.
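
The mechanism is easy to reproduce on toy data. The sketch below is not Amazon’s actual system; the features, labels and model are invented for illustration. A classifier is trained on “historical” hiring decisions that favour one group, and the learned model simply mirrors that preference, even for equally skilled candidates.

    from sklearn.linear_model import LogisticRegression

    # Toy "historical" hiring data: [is_male, skill_score], label = hired.
    # The labels favour is_male=1 regardless of skill, mimicking a biased past.
    X = [[1, 0.4], [1, 0.5], [1, 0.6], [0, 0.7], [0, 0.8], [0, 0.9]]
    y = [1, 1, 1, 0, 0, 0]

    model = LogisticRegression().fit(X, y)

    # Two candidates with identical skill, differing only in the biased attribute:
    print(model.predict([[1, 0.7]]))  # likely [1]: hired
    print(model.predict([[0, 0.7]]))  # likely [0]: rejected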

Another example was Microsoft’s automated Twitter bot Tay. Tay was a machine learning experiment that used unsupervised learning. Within 16 hours of its launch, Microsoft had to shut it down because it was posting highly offensive tweets. Microsoft attributed this to directed attacks on the system by trolls, but it shows a big issue with self-learning algorithms: they have a hard time distinguishing right from wrong, especially when there is no guidance.

This kind of bias is seen often because training data basically reflects our own historical behaviour. If we use that data to train our models, the model will incorporate the same behaviour and reinforce the bias even further.

We need transparency in the way algorithms work and how they affect our decisions, but quite often this is hard to achieve. Often enough there are intellectual property issues to deal with: a lot of work has gone into the development of an algorithm, and its owner will be reluctant to let you look behind the scenes. In machine learning situations it may even become virtually impossible to determine how the algorithm came to be, especially since the amount and variety of data, and therefore the factors that significantly contribute to the algorithm, are ever increasing. So even if we wanted to understand, we simply can’t.

This means that we should trust the algorithms only up to a certain point and keep a sceptical attitude towards them. We simply can’t take the results at face value, and we should add a human factor to the final decision. But how can you trust something if you don’t understand how it works? How do you know it is not guiding you along the wrong path? The answer may come from incorporating inherent governance or principles that guide the algorithms in making the right choices. Think of the three laws of robotics drawn up in 1942 by Isaac Asimov:

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov later added the Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

By incorporating similar laws into AI algorithms, we could prevent fundamental flaws. This will not solve all issues with AI, but it does create a basis for trust if we can verify that an algorithm abides by our governing laws.
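
As a thought experiment, such governing laws could take the form of an explicit, human-readable check that sits between a model and the outside world and vetoes any proposed action that violates a law. Below is a purely illustrative Python sketch; the Action type, the harms_human flag and the guard function are invented placeholders, and judging harm is of course the genuinely hard part.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Action:
        description: str
        harms_human: bool  # in reality, making this judgement is the hard part

    def first_law_guard(action: Action) -> Optional[Action]:
        """Veto any proposed action that would harm a human (Asimov's First Law)."""
        if action.harms_human:
            return None  # blocked; the system must fall back to a safe default
        return action

    proposed = Action("apply maximum brake force", harms_human=False)
    approved = first_law_guard(proposed)
    print(approved.description if approved else "action vetoed")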

I hope this short introduction to the ethical aspects of AI is as much of an eye-opener for you as it was for me. There is great potential, both for good and for bad. We’re only at the beginning of this journey, and I am curious what the future will bring.

Rob Kool, BI Consultant