Artificial intelligence is now a well-established part of our lives and is developing rapidly, with its usage expanding all the time. It is not yet limited only by our imaginations, as the technology itself still has limits, at least for now. However, advanced AI could help heal our nations of discrimination, or it could be abused to aid terrible human rights atrocities.
A panel at last month's Future Investment Initiative conference, entitled "Preparing for Sentient AI," discussed the potential impact of sentient AI. Sentient was defined as being able to "think, perceive and feel, in a highly natural way, with compassion." In 1950, the British scientist Alan Turing devised what is now known as the Turing test, which judges whether a machine can answer a series of questions in a way indistinguishable from a human. Having used customer service chatbots, I think we are a long way off achieving this.
The CEO of Omdena, Rudradeb Mitra, spoke of how AI could remove human bias in many areas, with recruitment being one example. However, he made the point that algorithms are only as good as their programming, and humans would need to be careful not to transfer their own biases into the data they feed them. For example, if a computer were looking for the best candidate to be an engineer and it looked at past data on what attributes made the most successful engineers, this could exclude women, as far fewer women have been trained and employed as engineers in the past. So care is needed not to build a discriminatory bias into the data.
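To make that mechanism concrete, here is a minimal, hypothetical Python sketch. It is not from the article, and the data, feature names and numbers are invented purely for illustration. It shows how a model trained naively on historically biased hiring decisions can score two equally qualified candidates differently purely because of gender.

```python
# Hypothetical sketch of historical bias leaking into a hiring model.
# All data, thresholds and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: a skill score (0-100) and gender (1 = male, 0 = female).
skill = rng.uniform(0, 100, n)
gender = rng.integers(0, 2, n)

# Simulated *historical* hiring decisions: skill mattered, but past recruiters
# also favoured men, so the recorded outcomes carry that bias.
hired = (skill + 15 * gender + rng.normal(0, 10, n)) > 60

# Train a model naively on the biased history, with gender as an input feature.
model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two equally skilled candidates now get different scores purely due to gender.
woman, man = [[70, 0]], [[70, 1]]
print("P(hire | woman, skill=70):", model.predict_proba(woman)[0, 1])
print("P(hire | man,   skill=70):", model.predict_proba(man)[0, 1])
```

Note that simply dropping the gender column would not necessarily fix this, since other features can act as proxies for it, which is why careful, diverse screening of the input data matters.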
Mitra believes that the right algorithms can have a very positive impact. He said: "We have a good opportunity to use AI to make a more just society, with equal opportunities for all, without the biases of humans, without preferring one over another. The more bias we build in, the more we divide society; in contrast, we can use AI to be more ethical and create more opportunities."
As mentioned, there is a risk of great abuse with AI, not least affecting the right to privacy. Many companies and IT platforms share their users' data, with little or no choice to opt out if you wish to use their services. The ease with which data can be shared makes people very vulnerable. Under extreme, dictatorial regimes, such information can be used against citizens, reminiscent of George Orwell's dystopian novel "1984." Not only can it be used to limit people's freedoms generally, but, as DNA sequencing advances, it raises highly controversial prospects such as governments dictating who can and cannot have children together in order to avoid disabilities, as well as broader genetic discrimination over career freedoms, as depicted in the Hollywood movie "Gattaca."
On the flip side, those without access to IT become invisible and unable to access services. The homeless, for example, who are already vulnerable and experience high levels of discrimination, become even less a part of society, even with regard to something as simple as money. In a world where technology controls how we pay for food, those who are not "connected" to the network cannot pay. They lose more than their identity; they lose the ability to obtain the basic resources needed to survive, particularly as fewer services are now run by humans. Instead, they are run by AI bots, which have no compassion. And since these bots do not yet have all the answers, the system can lead people to dead ends.
To some of us, this can be incredibly frustrating, as it may be the difference between being accepted for a business loan or not. To others, who need the resource in question to survive to the next day, it literally might be a matter of life or death.
Antonio Simeone, the co-founder of Euklid, stated that, as we develop this technology, there should be a strong bridge between scientists, policymakers and corporations in writing a framework of ethical limitations. I would say this needs to be much more universal, with global parties all joining together to design and integrate that framework. It needs a full international board to ensure integrity and security.
Mozn CEO Mohammed Alhussein was optimistic that, in the near future, we could attain AI that can "feel" or make decisions, but was skeptical that it could do so without the input of a human to provide context and compassion in life-or-death situations. The panel concluded that human decisions are made up of more than just data; we also use intuition and context.
AI is changing the world with wonderful, life-improving innovations, perhaps at a faster rate than we can keep pace with. As the technology advances, we must take care that we do not lose control, and we must ensure we keep context in decision-making. We must either use diverse teams to screen input data for biases or program AI carefully enough to do this for us. Intuition and context, minus bias, are vital to retaining our humanity and our basic human rights.