Natural Language Processing (NLP) applications are now ubiquitous, used daily by millions of people worldwide. Nevertheless, these applications can be remarkably brittle and biased. For example, the accuracy of syntactic parsing models has been shown to drop by at least 20 percent on African-American Vernacular English compared to textbook-like English, as it is commonly spoken by more privileged Americans. Further, sentiment analyzers fail on language from different time periods, question-answering systems fail on British English, conversational assistants struggle to interact with millions of elderly people with speech disabilities, and hate speech detection systems are more likely to incorrectly classify language from specific demographics as offensive. In short, NLP models and applications work well only for a minority of the population, effectively excluding a significant majority that uses these applications just as often. It is shocking that roughly 6,500 languages are spoken in the world today, yet advances in NLP in academia and industry focus on a minuscule subset of them. Given the rapid rate of technology adoption globally, there is a pressing need to measure and understand NLP performance inequalities across the world's languages. In this blog post, we summarize two recent publications that address this matter...