In a world increasingly dominated by technology, you would think an ethical approach to its development would be at the top of our priority list. Unfortunately, thanks to the profit-first, ask-questions-later model that has proliferated throughout Silicon Valley startups, ethics has been put on hold. But what makes technology ethical, and how does the failure to create such technology affect us?
With the rise of algorithms in the college financial aid and admissions process, the material consequences of those algorithms have become more and more apparent. The creator of an algorithm is given a near-impossible task: building a system that analyzes data in a human-like way, without human bias. As Alex Engler of the Brookings Institution put it, “Higher education is already suffering from low graduation rates, high student debt, and stagnant inequality for racial minorities — crises that enrollment algorithms may be making worse.” Even when the task seems simple, such as quantifying the desired attributes in an application, the sheer volume of applications creates a tough situation for outliers in extenuating circumstances, often the applicants who may hold the most promise. In addition, many institutions have been criticized for prioritizing revenue in the application process; critics believe the heavy weight placed on zip code perpetuates racial and class inequalities.
With automation inevitable at many levels, thinking about where our practices can be improved is a necessary step toward equitable systems. Many opponents of the algorithms colleges use in admissions and financial aid have pointed toward potential reforms, first among them transparency about how algorithms affect acceptance rates, in order to hold institutions accountable for the consequences of their technology, purposeful or not. Others advocate stronger human mediation: people double-checking the algorithm's work while still benefiting from its speed.
Although student backlash is significant, many colleges are holding firm. In response to claims of inequity, Madeleine Rhyneer, dean of enrollment management at the Education Advisory Board, said, “We try to remind them that there’s more power in their hands than they often feel is the case, as they’re putting their whole life in front of anonymous admission committees.” Rhyneer and many others argue that the role algorithms play in determining financial aid and admissions is easily offset by the other aspects of a strong application; a poor score from the algorithm, they claim, is just a drop in the bucket compared with other qualifications.
As documents from an investigation by The Markup show, more than 500 universities use an algorithm with strong racial biases to determine the academic risk of students. For example, Black women were 2.8 times more likely to be labeled high risk than white women, and Black men were 3.9 times more likely than white men. If our algorithms don’t reflect a better future, it is likely that future will never come about.