Thoughts from the CEO – What Is Ethical AI, and Is It Real?

The broad definition of AI, or artificial intelligence, is the study and development of computer systems (machines) that can learn and think in much the same way as the human mind does; based on what it has learned, the machine can apply that knowledge to make a decision, come to a conclusion or solve a problem.

But what if the decision – one that ultimately leads to action – is harmful? What if, instead of doing good, the end result causes harm? Should that be controlled? Can it be controlled?

With the speed at which technology, AI and machine learning are developing and affecting our world at such a massive scale, there are inherent fears and ethical implications that come with this amount of power – power whose ultimate capabilities are not yet fully understood. The quality of the data itself also poses an ethical dilemma: if we allow algorithms to decide our future based on data from the past, we risk repeating the same mistakes. In that case, social progress is damaged and moves backwards – not forward.

It’s important to examine the positives and negatives of artificial intelligence. Do the benefits outweigh the risks? Is there such a thing as ethical AI?

The Ethical Concerns of AI

There’s a common misconception about AI: that the more data, the better.

Collecting big data doesn’t ensure that the results are reliable, relevant or up to date, which in turn doesn’t ensure that those results serve democracy, equality, justice and wellbeing. So the ethical implications of enabling machines to perform duties once reserved for humans can be serious. This is especially worrying in areas with global impact, like autonomous weapons and mass surveillance.

A couple of issues come into play. One is why and how this incredibly powerful tool will be used – whether the original intent of the technology is beneficial in nature. Two, even if it is meant for good, there’s the question of whether ethical means were used to build and apply it. Introducing bias or discriminatory parameters in the development of these machine systems – even if done unintentionally – has consequences. We’re haunted by the image of unforeseen repercussions, or the frequently used movie theme of “a machine gone rogue.”

I would categorize the main areas of risk as:

  • Bias

Bias means building a model that doesn’t accurately reflect the population for which it is intended. An example is facial-recognition technology that has been shown to be less accurate for people of color and for women because it was trained on a biased data set of predominantly white males. There are also always humans behind AI system programming, so there’s the risk of software engineers building their own underlying biases into the program (see the sketch after this list).

  • Fairness

Even if the data set used to build the model accurately represents the history of that population, there’s the risk that the history itself is unfair. An example is predictive policing built on arrest records that are historically biased against certain races.

  • Unethical Use

Could the systems being developed be put to unethical uses? The concern is greatest when a machine’s capabilities are built at a scale large enough to have destructive ramifications.

  • The Right to Forget

Learning from history is not the same as keeping a list of mistakes. If we can’t build technology that is able to forget – to take the lesson from the past without holding on to the record itself – we are building a rigid, unfair society based on a historical list of wrong actions.
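To make the bias risk above concrete, here is a minimal sketch (not any particular vendor’s code) of a subgroup accuracy audit: given a model’s predictions and a sensitive attribute such as skin tone or gender, it reports accuracy per group so that gaps like the facial-recognition example become visible. The group labels and data below are purely hypothetical.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} for each value of the sensitive attribute."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical example: a face-matching model evaluated across demographic groups.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0]
groups = ["lighter_male", "lighter_female", "darker_male", "darker_female",
          "lighter_male", "darker_female", "darker_female", "lighter_female"]

for group, acc in sorted(accuracy_by_group(y_true, y_pred, groups).items()):
    print(f"{group}: {acc:.0%}")
```

Audits like this don’t remove bias on their own, but they make it measurable, which is the precondition for fixing the training data or the model.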

Directing the Power of AI to Boost Social Good

As with all timeless debates, there’s always the flip side. Along with risks, artificial intelligence has countless benefits. Let’s turn over the tarnished and ugly side of the coin and take a look at its shiny and promising side.

Deep learning – the form of machine learning that teaches a computer to perform tasks by learning from text, sound or images – has proven faster and more efficient than humans at many identification, processing and classification tasks. This is immense power that already has real-world applications for improving society and social good.

AI technology is already used in the financial services sector to protect consumers against fraud; audio-sensor data is being used for environmental conservation efforts around the globe; in the healthcare field, disease-detection artificial intelligence systems have been used to examine skin images for cancer diagnosis.

At Citibeats, we’ve seen our AI text analytics make a significant impact: faster response times in disaster relief, the development of hate speech and social policy, and the inclusion of citizen feedback in strategic approaches to meeting the UN Sustainable Development Goals (SDGs).

Ways to Minimize the Risks of AI

The ethical concerns of AI are a serious matter. However, the good AI can do in the world cannot, and should not, be ignored.

Some ways to minimize these risks include aggregating massive amounts of data into insights about cohorts of citizens, rather than individuals, to reduce bias and discrimination, as the sketch below illustrates. Using advanced categorization systems can also minimize unreliable data by identifying bot activity and filtering out fake news.
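A minimal sketch, assuming hypothetical per-message records and an assumed minimum cohort size, of how individual signals can be rolled up into cohort-level insights before any analysis, so no conclusion is drawn about a single identifiable person:

```python
from collections import Counter

MIN_COHORT_SIZE = 50  # assumed privacy threshold: suppress cohorts too small to be anonymous

def aggregate_by_cohort(records):
    """records: iterable of dicts like {"region": "north", "topic": "housing"}.
    Returns counts of (region, topic) pairs, dropping regions below MIN_COHORT_SIZE."""
    counts = Counter((r["region"], r["topic"]) for r in records)
    region_totals = Counter(r["region"] for r in records)
    return {
        (region, topic): n
        for (region, topic), n in counts.items()
        if region_totals[region] >= MIN_COHORT_SIZE
    }
```

The threshold value is an illustrative assumption; the design choice that matters is that individual-level records never leave the aggregation step.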

We’ve also seen entities taking ownership of this responsibility by establishing their own set of guidelines to keep the research, development, operation and use of AI human-centered and beneficial to society at large.

On May 29, 2019, NTT Data announced its AI ethics guidelines. The five principles are:

  1. Realizing Well-Being and Sustainability of Society
  2. Co-creating New Values by AI
  3. Fair, Reliable and Explainable AI
  4. Data Protection
  5. Contribution to Dissemination of Sound AI

This is a good start toward paving the way for others to adopt a set of beliefs that will shape a positive and harmonious coexistence between AI and society. Whether this is enough to keep AI ethical is a matter of time, dedication and persistence.

Inter has also published its ethical principles for digital development, which consist of a list of best practices for technology.

Vision for the Future

The ethics of data and artificial intelligence use is a subject that impacts us all. It is a trap to believe that technology can solve ethical, social or political problems; that task falls on us. “With great power comes great responsibility” – we are responsible for the citizens of our shared world, and for taking every means available to maintain fairness, peace and equality.

This realm of ethical AI is far from solved or completely understood. The ideal vision is that machine learning will be used solely as a means to useful, beneficial and ethical ends; that the future will further unlock the tremendous potential this powerful tool has to do social good – and keep the benefits far greater than the risks.

So, is ethical AI real – can it exist?

In our vision of the world, we believe that it can.