I'm not saying that all people in tech are racist, just that a lack of understanding of how racism works, in terms of unconscious and institutional bias, can result in those biases finding their way into the tech itself. Systemic racism in the tech industry is, as Sue says, a widely acknowledged problem: here's what I got when I Googled "racism tech industry":
[screenshot: Google search results for "racism tech industry"]
Regarding racism in facial recognition software, again, the problem is widely understood to go beyond the open source software mentioned by Buolamwini in the article I linked to: her own research addresses three commercial systems (see also here, on both open source and commercial software). Maybe the Neoface people supplying the UK police have sorted all this out; maybe I shouldn't make assumptions. On the other hand, since the software hasn't undergone demographic testing for this known problem, why shouldn't I?
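To be concrete about what "demographic testing" would involve: at a minimum, comparing the system's error rates across demographic groups on a labelled test set, as Buolamwini's work does. Here's a minimal sketch of that idea; the group names, data, and function are all illustrative, not taken from any real evaluation or vendor system.

```python
# Hypothetical sketch of per-group error-rate testing for a
# recognition system. Every name and number here is made up
# for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns the fraction of incorrect predictions per group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy labelled test set: the system is right 3/4 of the time for
# one group and only 1/4 of the time for the other.
records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", False, True), ("group_a", True, True),
    ("group_b", False, True), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, True),
]
rates = error_rates_by_group(records)
print(rates)  # a disparity like this is exactly what such testing surfaces
```

The point isn't the code, which is trivial; it's that this kind of disaggregated evaluation is cheap and well understood, so its absence before police deployment is a choice, not a technical obstacle.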
Regarding institutional racism in the UK police, it's not really disputed, as Andrew C concedes. There's a mountain of evidence.