This is a tricky question because an AI program only does what it is asked to do. Let us start with a simple example.

A stockbroker firm usually has pretty high standards for who they hire. If they used an AI program to decide who they should hire out of college, they would need some data to let the AI know what it is looking for. Let’s say the firm thinks the perfect employee is a graduate of Harvard, INSEAD, Oxford, or LSE who is interested in football, whiskey, and Porsches. Statistics on the current employees also show that the best brokers all have close Facebook friends called William, Percy, and Jurgen. Given that the firm operates internationally and gets applicants from all over the world, the AI program would most likely choose British or German male applicants, given the criteria it was fed. This result may seem racist and sexist, but the AI program actually did its job perfectly considering the data it was given: it found the new candidates who are statistically the best match for the firm.
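To make the idea concrete, here is a minimal sketch of how such a criteria-matching screener could look. Everything here is invented for illustration: the field names, the weighting (one point per matched criterion), and the sample applicants are assumptions, not a real hiring system.

```python
# Toy screening criteria taken from the example above.
# All names, fields, and weights are illustrative only.

TARGET_SCHOOLS = {"Harvard", "INSEAD", "Oxford", "LSE"}
TARGET_INTERESTS = {"football", "whiskey", "Porsche"}
TARGET_FRIEND_NAMES = {"William", "Percy", "Jurgen"}

def score(candidate):
    """Count how many of the firm's criteria a candidate matches."""
    points = 0
    if candidate["school"] in TARGET_SCHOOLS:
        points += 1
    points += len(TARGET_INTERESTS & set(candidate["interests"]))
    points += len(TARGET_FRIEND_NAMES & set(candidate["friend_names"]))
    return points

applicants = [
    {"name": "Hans", "school": "Oxford",
     "interests": ["football", "whiskey"], "friend_names": ["Jurgen"]},
    {"name": "Amara", "school": "University of Lagos",
     "interests": ["football", "gin"], "friend_names": ["Chidi"]},
]

ranked = sorted(applicants, key=score, reverse=True)
print([a["name"] for a in ranked])  # → ['Hans', 'Amara']
```

Note that the scorer itself contains no notion of nationality or sex; the skew comes entirely from which schools, hobbies, and friend names were chosen as targets.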


Another example is the challenges facial recognition faces when met with people with very dark or very light skin, or women wearing a considerable amount of makeup. You can read more about these challenges here: Facial recognition 101.


Let’s go back to the initial example. How can we alter this algorithm to ensure that the firm also gets candidates from other countries? This is a tricky challenge: the criteria have to be narrow enough that the best candidates still get noticed, but wide enough that all races and sexes are represented. We could start by adding the best business schools in Asia and Africa. Whiskey is fairly popular worldwide, so we can keep it, but let us add gin as it is growing in popularity in Africa. As our best candidates are very interested in cars, we can also add Lexus and Hongqi. Because we have expanded our horizons, we can no longer use the best-friend list: it would be huge, and we don’t have enough data to make it accurate. When all of this is entered into the algorithm, the result should be quite different because our key factors have changed. The AI program still does exactly what it is supposed to do. What has changed is the data the programmers have fed into the algorithm, and that makes all the difference.
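The broadened criteria above can be sketched the same way. The specific added school names are invented placeholders (the text only says "the best business schools in Asia and Africa"), and the one-point-per-match weighting is still an assumption.

```python
# Toy scorer with the broadened criteria: more schools, gin added,
# Lexus and Hongqi added, and the Facebook friend list dropped entirely.
# The Asian and African school names below are invented placeholders.

TARGET_SCHOOLS = {
    "Harvard", "INSEAD", "Oxford", "LSE",
    "CEIBS", "University of Cape Town",  # illustrative additions
}
TARGET_INTERESTS = {"football", "whiskey", "gin", "Porsche", "Lexus", "Hongqi"}

def score(candidate):
    """Count how many of the broadened criteria a candidate matches."""
    points = 0
    if candidate["school"] in TARGET_SCHOOLS:
        points += 1
    points += len(TARGET_INTERESTS & set(candidate["interests"]))
    return points

applicant = {"school": "CEIBS", "interests": ["football", "gin", "Lexus"]}
print(score(applicant))  # → 4
```

The code barely changed; only the target sets did. That is the whole point: the same algorithm produces a very different candidate pool once the data defining "a good match" is widened.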


So an AI program isn’t racist in its own right; in most cases, a “racist” result comes from insufficient or inaccurate training data. Despite what the media might say, AI is still in the early stages of widespread use and only gets better over time. An AI program might produce what seem like racist results today, but with better data, it may give a totally different result a year from now.