Category Archives: Ethical Algorithms

Chelsea Manning to Technologists: Please Take the Time To Contemplate Your System’s Potential Misuse

Chelsea Manning will be speaking at the Fifth Annual Aaron Swartz Day Evening Event – Saturday, November 4, 2017 – 7:30 pm – TICKETS

Chelsea E. Manning at Dolores Park in San Francisco, September, 2017.

From October 8, 2017, in New York City (at the New Yorker Festival):

I think the most important thing that we have to learn, because I think it’s been forgotten, is that every single one of us has the ability to change things. Each and every one of us has this ability. We need to look to each other and realize our values are what we care about, and then assert them, and say these things, and take action in our political discourse to make that happen. Because it’s not going to happen at the ballot box. It’s not.

Make your own decisions. Make your own choices. Make your own judgement.

You have to pay attention. For engineers in particular. We design and we develop systems, but the systems that we develop can be used for different things. The software that I was using in Iraq for predictive analysis was the same that you would use in marketing. It’s the same tools. It’s the same analysis. I believe engineers and software engineers and technologists… (That’s a new term that came out while I was away :-)

I guess technologists should realize that we have an ethical obligation to make decisions that go beyond just meeting deadlines or creating a product. What actually takes some chunks of time is to ask “what are the consequences of this system?” “How can this be used?” “How can this be misused?” Let’s try to figure out how we can keep a software system from being misused. Or decide whether you want to implement it at all. There are systems that, if misused, could be very dangerous. — Chelsea E. Manning, October 8, 2017.

Excerpt WNYC The New Yorker Radio Hour (starts at 31:45):
http://www.wnyc.org/story/chelsea-manning-life-after-prison/

About the Ethical Algorithms Panel and Technology Track

See Caroline Sinders and Kristian Lum, live at 2pm, on November 4th.

Technology Track – Ethical Algorithms
2:00 – 2:45 pm – Ethical Algorithms Panel – w/Q and A.
Kristian Lum (Human Rights Data Analysis Group – HRDAG) As the Lead Statistician at HRDAG, Kristian’s research focus has been on furthering HRDAG’s statistical methodology (population estimation or multiple systems estimation—with a particular emphasis on Bayesian methods and model averaging).
Caroline Sinders (Wikimedia Foundation) – Caroline uses machine learning to address online harassment at Wikimedia, and before that, she helped design and market IBM’s Watson. Caroline was also just named as one of Forbes’ “8 AI Designers You Need to Know.” Plus special guests TBA.
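For readers unfamiliar with multiple systems estimation, the core idea can be sketched with the classic two-list (Chapman) estimator: if two independently compiled lists of victims overlap heavily, the true population is probably not much larger than the lists; if they barely overlap, it probably is. This is only a toy illustration with made-up data, not HRDAG’s actual methodology, which is Bayesian, model-averaged, and far more careful about list dependence:

```python
# Toy sketch of two-list multiple systems estimation using the
# Chapman (bias-corrected Lincoln-Petersen) estimator.
# HRDAG's real work uses Bayesian methods and model averaging
# over many lists; this only shows the basic intuition.

def chapman_estimate(list_a, list_b):
    """Estimate total population size from two overlapping lists of IDs."""
    a, b = set(list_a), set(list_b)
    overlap = len(a & b)
    # Bias-corrected version of n_a * n_b / overlap
    return (len(a) + 1) * (len(b) + 1) // (overlap + 1) - 1

# Two hypothetical lists of documented victims, by (made-up) ID
ngo_list   = ["v01", "v02", "v03", "v04", "v05", "v06"]
press_list = ["v04", "v05", "v06", "v07", "v08"]

print(chapman_estimate(ngo_list, press_list))  # -> 9
```

Here six names on one list and five on the other share three IDs, so the estimator infers roughly nine victims in total, including ones neither list captured.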

About the Ethical Algorithms Panel and Technology Track
by Lisa Rein, Co-founder, Aaron Swartz Day

I created this track based on my phone conversations with Chelsea Manning on this topic.

Chelsea was an Intelligence Analyst for the Army and used algorithms in the day-to-day duties of her job. She and I have been discussing algorithms, and their ethical implications, since the very first day we spoke on the phone, back in October 2015.

Chelsea recently published a New York Times Op-Ed on the subject: The Dystopia We Signed Up For.

From the Op-Ed:

“The consequences of our being subjected to constant algorithmic scrutiny are often unclear… algorithms are already analyzing social media habits, determining credit worthiness, deciding which job candidates get called in for an interview and judging whether criminal defendants should be released on bail. Other machine-learning systems use automated facial analysis to detect and track emotions, or claim the ability to predict whether someone will become a criminal based only on their facial features. These systems leave no room for humanity, yet they define our daily lives.”

A few weeks later, in December, I went to the Human Rights Data Analysis Group (HRDAG) holiday party, and met HRDAG’s Executive Director, Megan Price. She explained a great deal to me about the predictive software used by the Chicago police, and how it was predicting crime in the wrong neighborhoods based on the biased data it was being fed from meatspace. Meaning, the data itself was “good” in that it was accurate, but unfortunately, the actual less-than-desirable behavior of the Chicago PD was being used as a guide for sending officers out into the field. Basically, the department’s existing bad behavior was being used to prescribe its future behavior.
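The feedback loop Megan described can be sketched in a few lines of code. This is a deliberately crude toy simulation, with all numbers made up for illustration: two neighborhoods with identical true crime rates, a historical record biased toward one of them, and a “predictive” rule that patrols wherever recorded crime is highest. Because crime only gets recorded where patrols are present, the model keeps confirming its own bias:

```python
# Toy simulation of the predictive-policing feedback loop described
# above. Two neighborhoods, "A" and "B", have IDENTICAL true crime
# rates, but the historical record starts out biased toward A.
# All numbers here are hypothetical, for illustration only.
import random

random.seed(0)
TRUE_CRIME_RATE = {"A": 0.5, "B": 0.5}   # both neighborhoods identical
recorded = {"A": 10, "B": 2}             # historical bias: A over-policed

for year in range(5):
    # "Predictive" step: send heavy patrols where recorded crime is highest
    hot = max(recorded, key=recorded.get)
    patrols = {"A": 20, "B": 20}
    patrols[hot] = 100

    # Crime is only *recorded* when a patrol is present to observe it
    for hood in recorded:
        incidents = sum(random.random() < TRUE_CRIME_RATE[hood]
                        for _ in range(patrols[hood]))
        recorded[hood] += incidents

print(recorded)  # "A" accumulates far more recorded crime than "B"
```

Even though both neighborhoods generate crime at the same rate, neighborhood A stays “hot” in every iteration, because the record that drives the prediction is itself a product of where officers were sent.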

This came as a revelation to me. Here we have a chance to stop the cycle of bad behavior, by using technology to predict where the next real crime may occur, but instead, we have chosen to memorialize the faulty techniques of the past in software, to be used indefinitely.

I have gradually come to understand that, although these algorithms are being used in all aspects of our lives, it is often unclear how or why they work. It has also become clear that they can develop their own biases, based on the data they have been given to “learn” from. Often the origin of that “learning data” is not shared with the public.

I’m not saying that we have to understand exactly how every useful algorithm works (which I understand would be next to impossible), but I’m not sure a completely “black box” approach is best, at least when the public, public data, and public safety are involved. (Thomas Hargrove’s Murder Accountability Project‘s “open” database is one example of a transparent approach that seems to be doing good things.)

There also appears to be a disconnect within law enforcement. Some precincts seem content to rely on technology for direction, for better or worse, such as the predictive software used by the Chicago Police Department. In other situations, such as Thomas Hargrove’s Murder Accountability Project (featured in the article Murder He Calculated), technologists are having a hard time getting law enforcement to take these tools seriously. Even when these tools appear to have the potential to find killers, there are numerous invisible hurdles in the way of any kind of timely implementation. Even in these life-and-death cases, Hargrove has had a very hard time getting anyone to listen to him.

So, how do we convince law enforcement to do more with some data while we are, at the same time, concerned about the oversharing of other forms of public data?

I find myself wondering what can even be done, if simple requests such as “make the NCIC database’s data for unsolved killings searchable” seem to be falling on deaf ears.

I am hoping to have some actual action items that can be followed up on in the months to come, as a result of this panel.

References:

1. The Dystopia We Signed Up For, Op-Ed by Chelsea Manning, New York Times, September 16, 2017. (Link goes to a free version not behind a paywall, at Op-Ed News)

2. Pitfalls of Predictive Policing, by Jessica Saunders for Rand Corporation, October 11, 2016. https://www.rand.org/blog/2016/10/pitfalls-of-predictive-policing.html

3. Predictions put into practice: a quasi-experimental evaluation of Chicago’s predictive policing pilot. by Jessica Saunders, Priscillia Hunt, John S. Hollywood, for the Journal of Experimental Criminology, August 12, 2016. https://link.springer.com/article/10.1007/s11292-016-9272-0

4. Murder He Calculated, by Robert Kolker, for Bloomberg.com, February 12, 2017.

5. Murder Accountability Project, founded by Thomas Hargrove. http://www.murderdata.org/

6. Secret Algorithms Are Deciding Criminal Trials and We’re Not Even Allowed to Test Their Accuracy – By Vera Eidelman, William J. Brennan Fellow, ACLU Speech, Privacy, and Technology Project, September 15, 2017. https://www.aclu.org/blog/privacy-technology/surveillance-technologies/secret-algorithms-are-deciding-criminal-trials-and

7. Machine Bias – There’s software used across the country to predict future criminals. And it’s biased against blacks. by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

8. Criminality Is Not A Nail – A new paper uses flawed methods to predict likely criminals based on their facial features. by Katherine Bailey for Medium.com, November 29, 2016. https://medium.com/backchannel/put-away-your-machine-learning-hammer-criminality-is-not-a-nail-1309c84bb899


Caroline Sinders Named By Forbes as an “AI Designer That You Need To Know”

See Caroline Sinders at this year’s Aaron Swartz Day International Hackathon, at the San Francisco Hackathon‘s Ethical Algorithms Panel, Saturday at 2 pm, and at the evening event, Saturday night, November 4, 7:30 pm.

“8 AI Designers That You Need To Know,” by Adelyn Zhou, for Forbes.

Caroline Sinders – Machine Learning Designer and Researcher, former Interaction Designer for IBM Watson

Caroline Sinders

Caroline is an artist, designer, and activist who also loves writing code. She helped design and market IBM Watson, a billion-dollar artificial intelligence system built on advanced natural language processing, automated reasoning, machine learning, and other technologies. Sinders’ work on Watson focused on user flows and the impact of human decision-making in the development of robotics software. She recently left her dream job at IBM to pursue an equally challenging fellowship at Open Labs. A passionate crusader against online harassment, Caroline probes the different ways design can influence and shape digital conversations, with the ultimate goal of using machine learning to address online harassment. You can find her strong opinions on Twitter, Medium, LinkedIn, and her personal website.