(Article by Pedro Domingos republished from Spectator.com.au)
You can see this unfolding at AI conferences. Last week I attended the 2020 edition of NeurIPS, the leading international machine learning conference. What started as a small gathering now brings together enough people to fill a sports arena. This year, for the first time, NeurIPS required most papers to include a ‘broader impacts’ statement and made them subject to review by an ethics board. Every paper describing how to speed up an algorithm, for example, now needs a section on the social goods and evils of this obscure technical advance. ‘Regardless of scientific quality or contribution,’ stated the call for papers, ‘a submission may be rejected for… including methods, applications, or data that create or reinforce unfair bias.’
This was only the latest turn of the ratchet. Previous ones have included renaming the conference to something more politically correct and requiring attendees to explicitly accept a comprehensive ‘code of conduct’ before they can register, which allows the conference to kick attendees out for posting something on social media that officials disapprove of. More seriously, a whole subfield of AI has sprung up with the express purpose of, among other things, ‘debiasing’ algorithms. That’s now in full swing.
I posted a few tweets raising questions about the latest changes — and the cancel mob descended on me. Insults, taunts, threats — you name it. You’d think that scientists would be above such behavior, but no. I pointed out that NeurIPS is an outlier in requiring broader impact statements, and the cancelers repeatedly changed the subject. I argued against the politicization of AI, but they took that as a denial that any ethical considerations are valid. A corporate director of machine learning research, who is also a Caltech professor, published on Twitter, for all to see, a long list of people to cancel, their sole crime being to have followed me or liked one of my tweets. The same crowd succeeded in making my university issue a statement disavowing my views and reaffirming its liberal credentials.
Why the fuss? Data can have biases, of course, as can data scientists. And algorithms that are coded by humans can in principle do whatever we tell them. But machine-learning algorithms, like pretty much all algorithms you find in computer-science textbooks, are essentially just complex mathematical formulas that know nothing about race, gender or socioeconomic status. They can’t be racist or sexist any more than the formula y = a x + b can.
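To make that concrete, here is a minimal sketch in Python (the data, feature names and repayment scores are invented for illustration, not taken from any real system). Fitting the multi-variable version of y = ax + b to a table of numbers involves nothing but arithmetic; the algorithm has no access to what the columns mean, let alone any attitude toward the people behind them.

```python
import numpy as np

# Illustrative data only: each row is an applicant, each column a numeric feature.
# The column meanings ("income", "years employed") exist only in our heads;
# the algorithm sees an anonymous matrix of numbers.
X = np.array([
    [52_000.0, 3.0],
    [38_000.0, 1.5],
    [71_000.0, 7.0],
    [45_000.0, 4.0],
])
y = np.array([0.61, 0.42, 0.88, 0.57])     # e.g. an observed repayment score

# Ordinary least squares: find the a's and b that make y ≈ a1*x1 + a2*x2 + b.
A = np.hstack([X, np.ones((X.shape[0], 1))])   # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print("fitted weights and intercept:", coef)
```

Whatever patterns the fitted weights capture come from the data they are given, not from anything resembling a prejudice inside the formula.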
Daniel Kahneman’s bestselling book, Thinking, Fast and Slow, has a whole chapter on how algorithms are more objective than humans, and therefore make better decisions. To the militant liberal mind, however, they are cesspools of iniquity and must be cleaned up.
What cleaning up algorithms means, in practice, is inserting into them biases favoring specific groups, in effect reestablishing in automated form the social controls that the political left is so intent on. ‘Debiasing’, in other words, means adding bias. Not surprisingly, this causes the algorithms to perform worse at their intended function. Credit-card scoring algorithms may reject more qualified applicants in order to ensure that the same number of women and men are accepted. Parole-consultation algorithms may recommend letting more dangerous criminals go free for the sake of having a proportional number of whites and blacks released. Some even advocate outlawing the use in algorithms of any variable correlated with race or gender, on the grounds that this amounts to redlining. That would not only make machine learning, and all its benefits, essentially impossible; it would also be particularly ironic, given that those variables are precisely what we need to separate the legitimate factors in a decision from the ones we want to exclude.
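The trade-off is easy to see in a toy simulation. The sketch below is entirely synthetic (the scores, groups and ‘repayment’ outcomes are made up for illustration and stand in for no particular deployed system); it enforces one common debiasing criterion, demographic parity, by giving each group its own acceptance threshold, and then compares the repayment rate among accepted applicants with that of a single merit-based threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely synthetic illustration: scores from a hypothetical credit model,
# with two groups (0 and 1). Group 1 happens to score lower on average.
n = 10_000
group = rng.integers(0, 2, size=n)
score = rng.normal(loc=np.where(group == 0, 0.6, 0.4), scale=0.15)
prob_repay = np.clip(score, 0, 1)
repays = rng.random(n) < prob_repay      # ground truth: higher score, more likely to repay

# A single, merit-only threshold.
merit = score > 0.5

# One common 'debiasing' recipe: demographic parity, enforced by giving each
# group its own threshold so that acceptance rates come out equal.
target = merit.mean()
thresholds = {g: np.quantile(score[group == g], 1 - target) for g in (0, 1)}
parity = score > np.where(group == 0, thresholds[0], thresholds[1])

for name, accepted in [("merit-only", merit), ("parity-adjusted", parity)]:
    by_group = [round(accepted[group == g].mean(), 3) for g in (0, 1)]
    print(f"{name:16s} accept rate by group: {by_group}, "
          f"repayment rate among accepted: {repays[accepted].mean():.3f}")
```

Acceptance rates equalize by construction, and the quality of the accepted pool drops, because some higher-scoring applicants are turned away to make room for lower-scoring ones. Demographic parity is only one of several competing fairness definitions, but enforcing any of them as a hard constraint pulls the algorithm away from the decision rule that best predicts the outcome it was built to predict.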
If you question this or any other of a wide range of liberal demands on AI, you’re in for a lot of grief. The more prominent the researcher who gets canceled, the better, because it sends the most chilling message to everyone else, particularly junior researchers. Jeff Dean, Google’s legendary head of AI, and Yann LeCun, Facebook’s chief AI scientist and one of the pioneers of deep learning, have both found themselves on the receiving end of the liberal posse’s displeasure.
Conservatives have so far been largely oblivious to progressive politics’ accelerating encroachment on AI. If AI were still an obscure and immature field, this might be OK, but the time for that has long passed. Algorithms increasingly run our lives, and they can impose a militantly liberal (in reality illiberal) society by the back door. Every time you do a web search, use social media or get recommendations from Amazon or Netflix, algorithms choose what you see. Algorithms help select job candidates, voters to target in political campaigns, and even people to date. Businesses and legislators alike need to ensure that these algorithms are not tampered with. And all of us need to be aware of what is happening, so we can have a say. For my part, after seeing how blithely progressives assign prejudices even to algorithms that transparently can’t have any, I have started to question the orthodox view of human prejudices. Are we really as profoundly and irredeemably racist and sexist as they claim? I think not.