Why I Have (Some) Hope for the Future of Automated Decision Making

Releasing technology without forethought about its impact has produced harms that cannot easily be undone, because technology is now so thoroughly intertwined with daily life. These effects range from massive breaches of personal data to surveillance capitalism.

However, it’s the increasing digitization of decision making that is the most pressing issue currently facing society, given its wide-ranging implications. Automated decision making impacts everything from suggesting your next Netflix show to pre-trial risk assessments, credit scores, and government service delivery. Algorithms running unchecked have also allowed fake news to flourish across the globe, fraying the fabric of society and its trust in institutions. I see this playing out in the case of Kenya, where fake news impacted the results of their most recent elections. This is why I am assisting with research on an accountable algorithms project with the National Democratic Institute, in partnership with Article 19, to find ways to retrain the algorithms.

Automated decision making relies on algorithms to produce an output from the data collected as input. The data we personally generate to train the algorithms, through our likes and shares, is fed into systems already filled with the skewed data the algorithms were originally trained on. This is the “garbage in, garbage out” problem: unrepresentative or skewed training data leads to biased, even racist, algorithmic outputs, as the sketch below illustrates. The bubbles in which algorithms are created are starting to burst, as these outputs cause real-world harm.
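
To make “garbage in, garbage out” concrete, here is a minimal sketch in Python. The scenario (a loan-approval model), the features, and the data are entirely hypothetical and synthetic, not drawn from any real system; the point is only that a model trained on skewed historical decisions faithfully reproduces that skew.

```python
# Minimal sketch of "garbage in, garbage out" with synthetic data.
# A toy loan-approval model trained on skewed historical decisions
# reproduces that skew in its own predictions. Nothing here reflects
# a real system; all names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: an income score, and membership in a
# minority group (1) that is underrepresented in the data.
group = rng.binomial(1, 0.2, size=n)
income = rng.normal(0.0, 1.0, size=n)

# Skewed historical labels: past approvals penalized the minority
# group regardless of income, so the bias is baked into the
# "ground truth" the model learns from.
past_approved = (income - 1.5 * group + rng.normal(0.0, 0.5, size=n)) > 0

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, past_approved)

# Two applicants with identical income, differing only by group:
# the model approves them at very different rates, because it has
# learned the historical bias, not "merit".
applicants = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(applicants)[:, 1])
```

Note that nothing in this sketch is “broken” in an engineering sense; the model does exactly what it was asked to do. That is why fixing the training data, rather than the math, is where retraining efforts have to start.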

Thankfully, as the issue of accountable algorithms has gained public awareness, a number of creative solutions have begun to crop up. Tackling the issue at its root is one of the most promising paths to real change, and the roots, in my mind, are the programmers themselves and the data creators. Recent efforts to bring ethics courses into the curricula of top computer science programs are a great step toward raising awareness of representation and harm reduction. My own contribution has been to bring together open data experts from my Open Gov Hub community with algorithm policy experts to share lessons learned and find areas of commonality and collaboration along the AI supply chain. I am eager to continue these meetings, now with the input of programmers. The rise of the tech worker as an advocate for beneficial technology has also driven a great deal of change in automated decision making and in what tech companies feel they are allowed to do.

Of course, algorithms aren’t created only by the major tech companies, but also by small contractors working with companies and governments looking to scale up efficiency. Without the right frameworks in place for how algorithms should be programmed, trained, and deployed, the unintended consequences can and do harm a great many people. This is why the work being done by organizations like AI Now, Upturn, and Access Now to create ethics and rights frameworks around which algorithms can be designed is so important. Regulating algorithms and AI to minimize harm and provide for redress is vital to protecting society, as our lives will increasingly be shaped by these technologies. With a handful of bills relating to accountable algorithms currently before Congress, some order may yet be instilled in the unruly world of automated decisions.

Despite these solutions and the great work being done by a wide range of researchers, programmers, foundations, and policy experts, we still have a long way to go in creating a more equitable world. While redress is one of the core principles of the international human rights system, it is tricky to implement for data and AI, because most often people don’t even know what personal data about them is out there, or whether AI was used in a decision that affected them. We also haven’t figured out how to hold governments and companies accountable, or how citizens can have a voice and be represented accurately in their data. It’s a wild time, full of uncertainty and change, but through cross-sectoral, collaborative learning and work, more solutions will continue to unfold.