The Ethical Concerns of AI
There’s a common misconception about AI: that the more data, the better.
Collecting big data doesn’t ensure that the results are reliable, relevant, or up to date, which in turn doesn’t ensure that those results serve democracy, equality, justice, and well-being. The ethical implications of enabling machines to perform duties once reserved for humans can therefore be serious. This is especially worrying in areas of global impact such as autonomous weapons and mass surveillance.
Two issues come into play. The first is why and how this incredibly powerful tool will be used: whether the original intent of the technology is beneficial in nature. The second, assuming it is meant for good, is whether ethical means were used to carry out that intent. Introducing biased or discriminatory parameters into the development of these machine systems, even unintentionally, has consequences. We’re haunted by the image of unforeseen repercussions, or the frequently used movie theme of “a machine gone rogue.”
One risk is building a model that doesn’t accurately reflect the population it is intended to serve. An example is facial-recognition technology that has been shown to be less accurate for people of color and for women because it was trained on a biased data set of predominantly white males. There are also always humans behind AI system programming, so there is the risk of software engineers building their own underlying biases into the program.
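One reason this kind of bias goes unnoticed is that an overall accuracy number can look excellent while one group fares much worse. The sketch below uses purely hypothetical, illustrative per-group accuracies (not figures from any real study) to show how an imbalanced evaluation set hides the disparity:

```python
# Hypothetical per-group results for a recognition model.
# The numbers are illustrative assumptions, not real measurements.
group_sizes = {"group_a": 900, "group_b": 100}      # imbalanced evaluation set
group_accuracy = {"group_a": 0.95, "group_b": 0.60}  # hidden disparity

correct = sum(group_sizes[g] * group_accuracy[g] for g in group_sizes)
overall = correct / sum(group_sizes.values())

print(f"overall accuracy: {overall:.2%}")                     # 91.50% -- looks fine
print(f"group_b accuracy: {group_accuracy['group_b']:.2%}")   # 60.00% -- buried in the average
```

Because group_a dominates the test set, the headline number stays above 90% even though the model fails two out of five times on group_b, which is why per-group evaluation matters.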
Even if the data set used to build the model accurately represents that population’s history, there is the risk that the history itself, from which decisions are being made, is unfair. An example is predictive policing, which relies on arrest records that are historically biased against certain races.
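The danger here is a feedback loop: patrols go where past arrests were highest, which produces more arrests there, which justifies more patrols. The toy simulation below uses made-up numbers and a deliberately simplified "patrol the hotspot" rule to show how a skewed record grows on its own even when the true crime rate is identical in both neighborhoods:

```python
# Toy feedback-loop sketch (illustrative numbers, not real data).
# Both neighborhoods have the SAME true crime rate; only the
# historical arrest record differs at the start.
true_crimes_per_year = 100               # identical in each neighborhood
arrests = {"north": 60, "south": 40}     # historically skewed records

for year in range(5):
    hotspot = max(arrests, key=arrests.get)       # send most patrols where past arrests are highest
    for hood in arrests:
        detection = 0.8 if hood == hotspot else 0.2  # patrols determine what gets recorded
        arrests[hood] += int(true_crimes_per_year * detection)

print(arrests)  # the gap widens every year despite equal underlying crime
```

After five rounds the recorded gap has more than tripled, purely because the allocation rule treats its own past output as ground truth.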
Could the systems being developed actually be used for unethical purposes? Concern is highest when a machine’s capabilities are built at a scale large enough to make destructive ramifications possible.
The Right to Forget
Learning from history is not the same as keeping a list of mistakes. If we can’t build technology that is able to forget, and to learn lessons from the past rather than merely record it, we are building a rigid, unfair society based on a historical list of wrong actions.