Wanting the truth doesn’t mean you aren’t training people to lie to you. I learnt this way back when I tried answering Panelbase surveys: completing a survey earnt a small amount of money, but it had to be a completed survey, and many surveys included screening questions. Give a disliked answer to one of those and you’d be kicked out with no payment for your time (you’d be entered into a weekly prize draw, I think, as a sort of consolation award). One pattern that soon became obvious was that you’d usually be kicked out for giving a low figure when asked for your household income. (Remember that wealthy people are not likely to be doing surveys for peanuts in the first place. I think there was usually an option not to say, but it would get you kicked as well.) Which is why I’d take anything such surveys supposedly reveal about income distributions with a pinch of salt: everything we know about incentives and human nature indicates that plenty of respondents would have optimised their answers to favour returns over truth.
Recently Google’s captchas have been asking me to identify taxis. I say recently: other people have been getting this one since at least 2017. And I say taxis: actually it very obviously wants me to click on pictures of yellow cars. In this sceptred isle, however, taxis can be basically any colour, and the iconic taxi is a London black cab. Are all the yellow cars Google’s showing me American taxis? Honestly, I don’t know: it isn’t always possible to say for sure that the yellow car in the photo is a taxi. But I can tell the captcha expects me to click on yellow cars before I’m allowed to get on with whatever I was trying to do, so by Jingo I will click on the yellow cars.
Perhaps one reason we can’t yet hand over driving to AI is that the poor machines are stuck inside an enormous epistemological thought experiment, fed with training data by distracted and exasperated non-Americans with a Frankfurtian indifference to truth.