
AI systems are biased, and cloud availability makes it worse

Last updated: March 3, 2023


AI systems are biased

According to Gartner Research, through 2022, 85 percent of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. Moreover, 85 percent of Americans already use at least one AI-powered device, program, or service, the Gallup polling organization reports.

This bias is something I've known about for a while, having followed AI systems since the late 1980s. The fact of the matter is that people program and teach AI systems, so those systems tend to pick up the innate biases of the people who teach them.

The use of the cloud to host pricey AI systems is actually making things worse: the number of companies that can afford AI has gone up, but the number of people with solid AI skills has not grown at the same pace. So, in addition to that innate bias spreading as AI tools are used more broadly, the talent shortage means that mistakes in how the knowledge bases are built will be common for some time.

What do these biases look like? Women may find that they are getting the short end of the stick. That's because men do the majority of AI development and teaching, so their conscious or unconscious biases get encoded. For example, a 2015 study showed that in a Google Images search for "CEO," just 11 percent of the people it displayed were women, despite the fact that 27 percent of the chief executives in the US are female. (While it's easy to pick on Google, it moved fast to correct such issues.)

Companies will have to pay for these built-in AI biases. For example, they will need to absorb the profit hit of not writing loans to enough women, who comprise about 55 percent of the market. Also, such bias is bad karma at the least, and it can land you in legal hot water at worst.

What can be done about this? The reality is that biased AI systems are more the norm than the exception. So IT needs to recognize that the biases exist, or may exist, and take steps to limit the damage. Fortunately, tools are emerging to help you spot AI-based biases.
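To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of check such tools perform: comparing a model's approval rates across groups (a demographic-parity test). The data, group labels, and warning threshold below are illustrative assumptions, not any particular vendor's tool or dataset.

from collections import defaultdict

def approval_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Illustrative loan-approval predictions (1 = approve) and applicant groups.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
    groups = ["men", "men", "men", "men", "men", "men",
              "women", "women", "women", "women", "women", "women"]

    gap, rates = demographic_parity_gap(preds, groups)
    print("Approval rates:", rates)   # roughly {'men': 0.67, 'women': 0.33}
    print("Parity gap:", round(gap, 2))
    if gap > 0.2:  # arbitrary illustrative threshold
        print("Warning: approval rates differ sharply across groups; review the model and its training data.")

A gap like the one above does not prove discrimination by itself, but it is exactly the kind of signal that should trigger a closer look at the training data and the team that built the model.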

Still, you’ll have to be on the lookout for hidden biases and take action to minimize the harm.


