Category Archives: Ethical Algorithms

Chelsea Manning’s Op-Ed for the NY Times: The Dystopia We Signed Up For

From September 13, 2017

The Dystopia We Signed Up For

By Chelsea Manning

In recent years our military, law enforcement and intelligence agencies have merged in unexpected ways. They harvest more data than they can possibly manage, and wade through the quantifiable world side by side in vast, usually windowless buildings called fusion centers.

Such powerful new relationships have created a foundation for, and have breathed life into, a vast police and surveillance state. Advanced algorithms have made this possible on an unprecedented level. Relatively minor infractions, or “microcrimes,” can now be policed aggressively. And with national databases shared among governments and corporations, these minor incidents can follow you forever, even if the information is incorrect or lacking context…

In literature and pop culture, concepts such as “thoughtcrime” and “precrime” have emerged out of dystopian fiction. They are used to restrict and punish anyone who is flagged by automated systems as a potential criminal or threat, even if a crime has yet to be committed. But this science fiction trope is quickly becoming reality. Predictive policing algorithms are already being used to create automated heat maps of future crimes, and like the “manual” policing that came before them, they overwhelmingly target poor and minority neighborhoods.

The world has become like an eerily banal dystopian novel. Things look the same on the surface, but they are not. With no apparent boundaries on how algorithms can use and abuse the data that’s being collected about us, the potential for it to control our lives is ever-growing.

*** full text below for archival purposes ***

The Dystopia We Signed Up For

By Chelsea Manning

For seven years, I didn’t exist.

While incarcerated, I had no bank statements, no bills, no credit history. In our interconnected world of big data, I appeared to be no different than a deceased person. After I was released, that lack of information about me created a host of problems, from difficulty accessing bank accounts to trouble getting a driver’s license and renting an apartment.

In 2010, the iPhone was only three years old, and many people still didn’t see smartphones as the indispensable digital appendages they are today. Seven years later, virtually everything we do causes us to bleed digital information, putting us at the mercy of invisible algorithms that threaten to consume our freedom.

Information leakage can seem innocuous in some respects. After all, why worry when we have nothing to hide?

We file our taxes. We make phone calls. We send emails. Tax records are used to keep us honest. We agree to broadcast our location so we can check the weather on our smartphones. Records of our calls, texts and physical movements are filed away alongside our billing information. Perhaps that data is analyzed more covertly to make sure that we’re not terrorists — but only in the interest of national security, we’re assured.

Our faces and voices are recorded by surveillance cameras and other internet-connected sensors, some of which we now willingly put inside our homes. Every time we load a news article or page on a social media site, we expose ourselves to tracking code, allowing hundreds of unknown entities to monitor our shopping and online browsing habits. We agree to cryptic terms-of-service agreements that obscure the true nature and scope of these transactions.

According to a 2015 study from the Pew Research Center, 91 percent of American adults believe they’ve lost control over how their personal information is collected and used.

Just how much they’ve lost, however, is more than they likely suspect.

The real power of mass data collection lies in the hand-tailored algorithms capable of sifting, sorting and identifying patterns within the data itself. When enough information is collected over time, governments and corporations can use or abuse those patterns to predict future human behavior. Our data establishes a “pattern of life” from seemingly harmless digital residue like cellphone tower pings, credit card transactions and web browsing histories.

The consequences of our being subjected to constant algorithmic scrutiny are often unclear. For instance, artificial intelligence — Silicon Valley’s catchall term for deep-thinking and deep-learning algorithms — is touted by tech companies as a path to the high-tech conveniences of the so-called internet of things. This includes digital home assistants, connected appliances and self-driving cars.

Simultaneously, algorithms are already analyzing social media habits, determining creditworthiness, deciding which job candidates get called in for an interview and judging whether criminal defendants should be released on bail. Other machine-learning systems use automated facial analysis to detect and track emotions, or claim the ability to predict whether someone will become a criminal based only on their facial features.

These systems leave no room for humanity, yet they define our daily lives. When I began rebuilding my life this summer, I painfully discovered that they have no time for people who have fallen off the grid — such nuance eludes them. I came out publicly as transgender and began hormone replacement therapy while in prison. When I was released, however, there was no quantifiable history of me existing as a trans woman. Credit and background checks automatically assumed I was committing fraud. My bank accounts were still under my old name, which legally no longer existed. For months I had to carry around a large folder containing my old ID and a copy of the court order declaring my name change. Even then, human clerks and bank tellers would sometimes see the discrepancy, shrug and say “the computer says no” while denying me access to my accounts.

Such programmatic, machine-driven thinking has become especially dangerous in the hands of governments and the police.

In recent years our military, law enforcement and intelligence agencies have merged in unexpected ways. They harvest more data than they can possibly manage, and wade through the quantifiable world side by side in vast, usually windowless buildings called fusion centers.

Such powerful new relationships have created a foundation for, and have breathed life into, a vast police and surveillance state. Advanced algorithms have made this possible on an unprecedented level. Relatively minor infractions, or “microcrimes,” can now be policed aggressively. And with national databases shared among governments and corporations, these minor incidents can follow you forever, even if the information is incorrect or lacking context.

At the same time, the United States military uses the metadata of countless communications for drone attacks, using pings emitted from cellphones to track and eliminate targets.

In literature and pop culture, concepts such as “thoughtcrime” and “precrime” have emerged out of dystopian fiction. They are used to restrict and punish anyone who is flagged by automated systems as a potential criminal or threat, even if a crime has yet to be committed. But this science fiction trope is quickly becoming reality. Predictive policing algorithms are already being used to create automated heat maps of future crimes, and like the “manual” policing that came before them, they overwhelmingly target poor and minority neighborhoods.

The world has become like an eerily banal dystopian novel. Things look the same on the surface, but they are not. With no apparent boundaries on how algorithms can use and abuse the data that’s being collected about us, the potential for it to control our lives is ever-growing.

Our drivers’ licenses, our keys, our debit and credit cards are all important parts of our lives. Even our social media accounts could soon become crucial components of being fully functional members of society. Now that we live in this world, we must figure out how to maintain our connection with society without surrendering to automated processes that we can neither see nor control.

ACLU: Amazon Needs To Get Out Of The Surveillance Business

“But wait,” you may start to say, “I didn’t even know Amazon was IN the surveillance business.”

Yeah. Neither did we. :-/

This is pretty much our worst fears realized: A huge corporation quietly implementing biased facial recognition software without any oversight from anyone.

Needless to say, this situation falls under the territory of our #EthicalAlgorithms mandate.

Here’s an ACLU Petition with links to more information:

Amazon: Get out of the surveillance business

(https://action.aclu.org/petition/amazon-stop-selling-surveillance)

We are still evaluating the documents and will be planning a specific strategy to deal with this situation – Aaron Swartz Day style :-)

We have been making enormous progress on the Aaron Swartz Day Police Surveillance Project – a 100% successful experiment done in collaboration with the EFF, Oakland Privacy.net, cell phone privacy expert Daniel Rigmaiden, and the wonderful MuckRock.

The project provides letter templates to make it easy to ask your local police and sheriff’s departments what surveillance equipment they may have already purchased; they have to give you receipts and contracts if you guess correctly. (It’s like a little game show.)

So we are still in catch-up mode at this time – but we are on the case. And we have many experts and technologists working to explain and expose the truth, before it’s too late.

If we can’t stop it from being implemented in the short term, perhaps we can develop technologies to stop it from functioning properly. While we are working out these issues in the courts, there is nothing saying we can’t share information and take defensive action. If you know techniques that folks should know about, email us at aaronswartzday [@] gmail.com

More on the situation from the New York Times.

New York Times: Amazon Pushes Facial Recognition to Police.

Sign the ACLU petition here.   More on this issue here.

The Ethical Algorithms Panel & Track will be even more full than last year – at Aaron Swartz Day 2018’s San Francisco Hackathon. We will have projects for you to hack on from afar. (Keep your eyes right here for more information this week! :-) ProPublica story on Machine Bias here.

New York Times: Amazon Pushes Facial Recognition to Police.

By Nick Wingfield for the NY Times:

On Tuesday, the American Civil Liberties Union led a group of more than two dozen civil rights organizations that asked Amazon to stop selling its image recognition system, called Rekognition, to law enforcement. The group says that the police could use it to track protesters or others whom authorities deem suspicious, rather than limiting it to people committing crimes.

Here is the full text of the article – because, in our opinion, it is a clear-cut case of Fair Use, being information that is clearly in the public interest (and should not be behind a paywall in the first place).

*****

By Nick Wingfield

May 22, 2018

SEATTLE — In late 2016, Amazon introduced a new online service that could help identify faces and other objects in images, offering it to anyone at a low cost through its giant cloud computing division, Amazon Web Services.

Not long after, it began pitching the technology to law enforcement agencies, saying the program could aid criminal investigations by recognizing suspects in photos and videos. It used a couple of early customers, like the Orlando Police Department in Florida and the Washington County Sheriff’s Office in Oregon, to encourage other officials to sign up.

But now that aggressive push is putting the giant tech company at the center of an increasingly heated debate around the role of facial recognition in law enforcement. Fans of the technology see a powerful new tool for catching criminals, but detractors see an instrument of mass surveillance.

On Tuesday, the American Civil Liberties Union led a group of more than two dozen civil rights organizations that asked Amazon to stop selling its image recognition system, called Rekognition, to law enforcement. The group says that the police could use it to track protesters or others whom authorities deem suspicious, rather than limiting it to people committing crimes.

Facial recognition is not new technology, but the organizations appear to be focusing on Amazon because of its prominence and what they see as a departure from the company’s oft-stated focus on customers.

“Amazon Rekognition is primed for abuse in the hands of governments,” the group said in the letter, which was addressed to Jeff Bezos, Amazon’s chief executive. “This product poses a grave threat to communities, including people of color and immigrants, and to the trust and respect Amazon has worked to build.”

With the letter, the A.C.L.U. released a collection of internal emails and other documents from law enforcement agencies in Washington County and Orlando that it obtained through open records requests. The correspondence between Amazon and law enforcement officials provides an unusual peek into the company’s ambitions with facial recognition tools, and how it has interacted with some of the officials using its products.

Many of the companies supplying the technology are security contractors little known to the public, but Amazon is one of the first major tech companies to actively market technology for conducting facial recognition to law enforcement. The efforts are still a tiny part of Amazon’s business, with the service one of dozens it offers through Amazon Web Services. But few companies have Amazon’s ability to effectively push widespread adoption of tech products.
Amazon’s campus in downtown Seattle. The American Civil Liberties Union and other civil rights groups are asking the company to stop selling its image-recognition system, Rekognition, to law enforcement authorities. Credit: Ruth Fremson/The New York Times

“The idea that a massive and highly resourced company like Amazon has moved decisively into this space could mark a sea change for this technology,” said Alvaro Bedoya, executive director at the Center on Privacy & Technology at the Georgetown University Law Center.

In a statement, a spokeswoman for Amazon Web Services stressed that the company offered a general image recognition technology that could automate the process of identifying people, objects and activities. She said amusement parks had used it to find lost children, and Sky News, the British broadcaster, used it last weekend to automatically identify guests attending the royal wedding. (The New York Times has also used the technology, including for the royal wedding.)

The spokeswoman said that, as with all A.W.S. services, the company requires customers to comply with the law.

The United States military and intelligence agencies have used facial recognition tools for years in overseas conflicts to identify possible terrorist suspects. But domestic law enforcement agencies are increasingly using the technology at home for more routine forms of policing.

The people who can be identified through facial recognition systems are not just those with criminal records. More than 130 million American adults are in facial recognition databases that can be searched in criminal investigations, the Center on Privacy & Technology at Georgetown Law estimates.

Facial recognition is showing up in new corners of public life all the time, often followed by challenges from critics about its efficacy as a security tool and its impact on privacy. Arenas are using it to screen for known troublemakers at events, while the Department of Homeland Security is using it to identify foreign visitors who overstay their visas at airports. And in China, facial recognition is ubiquitous, used to identify customers in stores and single out jaywalkers.

There are also concerns about the accuracy of facial recognition, with troubling variations based on gender and race. One study by the Massachusetts Institute of Technology showed that the gender of darker-skinned women was misidentified up to 35 percent of the time by facial recognition software.

“We have it being used in unaccountable ways and with no regulation,” said Malkia Cyril, executive director of the Center for Media Justice, a nonprofit civil rights organization that signed the A.C.L.U.’s letter to Amazon.

The documents the A.C.L.U. obtained from the Orlando Police Department show city officials considering using video analysis tools from Amazon with footage from surveillance cameras, body-worn cameras and drones.

Amazon may have gone a little far in describing what the technology can do. This month, it published a video of an Amazon official, Ranju Das, speaking at a company event in Seoul, South Korea, in which he said Orlando could even use Amazon’s Rekognition system to find the whereabouts of the mayor through cameras around the city.
Video from an Amazon event where a company official spoke about the company’s facial recognition system. Credit: Video by Amazon Web Services Korea

In a statement, a spokesman for the Orlando Police Department, Sgt. Eduardo Bernal, said the city was not using Amazon’s technology to track the location of elected officials in its jurisdiction, nor did it have plans to. He said the department was testing Amazon’s service now, but was not using it in investigations or public spaces.

“We are always looking for new solutions to further our ability to keep the residents and visitors of Orlando safe,” he said.

Early last year, the company began courting the Washington County Sheriff’s Office outside of Portland, Ore., eager to promote how it was using Amazon’s service for recognizing faces, emails obtained by the A.C.L.U. show. Chris Adzima, a systems analyst in the office, told Amazon officials that he fed about 300,000 images from the county’s mug shot database into Amazon’s system.

Within a week of going live, the system was used to identify and arrest a suspect who stole more than $5,000 from local stores, he said, adding there were no leads before the system identified him. The technology was also cheap, costing just a few dollars a month after a setup fee of around $400.

Mr. Adzima ended up writing a blog post for Amazon about how the sheriff’s office was using Rekognition. He spoke at one of the company’s technical conferences, and local media began reporting on their efforts. After the attention, other law enforcement agencies in Oregon, Arizona and California began to reach out to Washington County to learn more about how it was using Amazon’s system, emails show.

In February of last year, before the publicity wave, Mr. Adzima told an Amazon representative in an email that the county’s lawyer was worried the public might believe “that we are constantly checking faces from everything, kind of a Big Brother vibe.”

“They are concerned that A.C.L.U. might consider this the government getting in bed with big data,” Mr. Adzima said in an email. He did not respond to a request for comment for this article.

Deputy Jeff Talbot, a spokesman for the Washington County Sheriff’s Office, said Amazon’s facial recognition system was not being used for mass surveillance by the office. The company has a policy to use the technology only to identify a suspect in a criminal investigation, he said, and has no plans to use it with footage from body cameras or real-time surveillance systems.

“We are aware of those privacy concerns,” he said. “That’s why we have a policy drafted and why we’ve tried to educate the public about what we do and don’t do.”

Chelsea Manning, Caroline Sinders, and Kristian Lum: “Technologists, It’s Time to Decide Where You Stand On Ethics”

(Left to Right) Kristian Lum, Caroline Sinders, Chelsea Manning.

A lot of folks were wondering what Chelsea Manning meant when she discussed a “Code of Ethics” during her SXSW talk last March. Well, there’s no need to wonder, because Chelsea discussed this in detail with her co-panelists Kristian Lum (Human Rights Data Analysis Group) and Caroline Sinders (Wikimedia Foundation), during the Ethical Algorithms track at the last Aaron Swartz Day at the Internet Archive.

Chelsea Manning, Caroline Sinders, and Kristian Lum: “Technologists, It’s Time to Decide Where You Stand On Ethics”

By Lisa Rein for Mondo 2000.

Link to the complete video for Ethical Algorithms panel.

Chelsea Manning

Chelsea Manning: Me personally, I think that we in technology have a responsibility to make our own decisions in the workplace – wherever that might be. And to communicate with each other, share notes, talk to each other, and really think – take a moment – and think about what you are doing. What are you doing? Are you helping? Are you harming things? Is it worth it? Is this really what you want to be doing? Are deadlines being prioritized over – good results? Should we do something? I certainly made a decision in my own life to do something. It’s going to be different for every person. But you really need to make your own decision as to what to do, and you don’t have to act individually.

Kristian Lum and Caroline Sinders.

Caroline Sinders: Even if you feel like a cog in the machine, as a technologist, you aren’t. There are a lot of people like you trying to protest the systems you’re in. Especially in the past year, we’ve heard rumors of widespread groups and meetings of people inside of Facebook, inside of Google, really talking about the ramifications of the U.S. Presidential election, of questioning, “how did this happen inside these platforms?” – of wanting there even to be accountability inside of their own companies. I think it’s really important for us to think about that for a second. That that’s happening right now. That people are starting to organize. That they are starting to ask questions.

Aaron Swartz Ceramic Statue (by Nuala Creed) and Kristian Lum.

Kristian Lum: There are a lot of models now predicting whether an individual will be re-arrested in the future. Here’s a question: What counts as a “re-arrest”? Say someone fails to appear for court and a bench warrant is issued, and then they are arrested. Should that count? So I don’t see a whole lot of conversation about this data munging.

Read the whole thing here. Watch the whole video here.

See all the Aaron Swartz Day 2017 videos here with the New Complete Speaker Index!

Thanks to ThoughtWorks for sponsoring the Ethical Algorithms Track at Aaron Swartz Day 2017. This track has also led to the launch of our Aaron Swartz Day Police Surveillance Project, and we have lots to tell you all about it, very soon :-)

Artificial General Intelligences (AGIs) & Corporations Seminar at the Internet Archive Tomorrow (Sunday)

Note: if you can’t make this event, check out this literature review and this paper, which will still give you a good idea of some of the subject matter :)

When: Sunday, April 8, 2018
Where: The Internet Archive, 300 Funston Ave, San Francisco, CA
Time: 2-6pm

Artificial General Intelligences & Corporations

Description:

Even if we don’t know yet how to align Artificial General Intelligences with our goals, we do have experience in aligning organizations with our goals. Some argue corporations are in fact Artificial Intelligences – legally at least we treat them as persons already.

The Foresight Institute, along with the Internet Archive, invites you to spend an afternoon examining AI alignment, especially whether our interactions with different types of organizations, e.g. our treatment of corporations as persons, allow insights into how to align AI goals with human goals.

While this meeting focuses on AI safety, it merges AI safety, philosophy, computer security, and law and should be highly relevant for anyone working in or interested in those areas.

Why this is really really important:

As we learned during last year’s Ethical Algorithms panel, there are many different ways that unchecked black box algorithms are being used against citizens daily.

This kind of software can literally ruin a person’s life, through no fault of their own – especially if they are already being discriminated against or profiled unfairly in some way in real life. This is because the algorithms tend to amplify and exaggerate any biases that already occur in the data being fed into the system (that it “learns” on).
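
To make that concrete, here is a tiny, deliberately oversimplified Python sketch. Every group name, rate, and number in it is invented for illustration (this is not anyone’s actual system); it just shows how a naive model that “learns” risk from arrest records, rather than from the underlying behavior, ends up scoring the more heavily policed group as riskier, even though both groups behave identically by construction.

    import random

    random.seed(0)

    # Invented toy numbers: both groups have exactly the same true rate of the
    # behavior, but group "A" has historically been patrolled twice as heavily,
    # so its behavior ends up in the arrest records (the "training data") far
    # more often.
    TRUE_RATE = 0.05
    RECORDING_RATE = {"A": 0.8, "B": 0.4}

    def make_arrest_records(n_per_group=100000):
        rows = []
        for group in ("A", "B"):
            for _ in range(n_per_group):
                behaved_badly = random.random() < TRUE_RATE
                recorded = behaved_badly and random.random() < RECORDING_RATE[group]
                rows.append((group, recorded))
        return rows

    def learn_group_risk(rows):
        # A naive "model": each group's risk score is simply its recorded arrest rate.
        counts = {"A": 0, "B": 0}
        totals = {"A": 0, "B": 0}
        for group, recorded in rows:
            totals[group] += 1
            counts[group] += int(recorded)
        return {g: counts[g] / totals[g] for g in totals}

    print(learn_group_risk(make_arrest_records()))
    # Prints roughly {'A': 0.04, 'B': 0.02}: group A looks twice as "risky" to
    # the model, even though the true rates were identical by construction.

Anything trained on those scores inherits the skew, and if the scores are then used to decide who gets watched next, the skew compounds (see the predictive policing discussion elsewhere on this page).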

Algorithms are just one of many tools that an AGI (Artificial General Intelligence) might use in the course of its daily activities on behalf of whatever Corporation it operates for.

The danger lies in the potential for these AGIs to make decisions based on the faulty interpretations of unchecked black box algorithmic calculations. For this reason, preservation of and public access to the original data sets used to train these algorithms is of paramount importance. And currently, that just isn’t the case.

The promise of AGIs is downright exciting, but how do we ensure that corporate-driven AGIs do not gain unruly control over public systems?

Arguably, corporations are already given too many rights – rights rivaling or surpassing those of actual humans at this point.

What happens when these Corporate “persons” have AGIs out in the world, interacting with live humans and other AGIs, on a constant basis? (AGIs never sleep.) How many tasks could your AGI do for you while you sleep at night? What instructions would you give your AGI? And whose “fault” is it when the goals of an AGI conflict with those of a living person?

Joi Ito, the Director of the MIT Media Lab, wrote a piece for the ACLU this week, concluding that AI Engineers Must Open Their Designs to Democratic Control: “The internet, artificial intelligence, genetic engineering, crypto-currencies, and other technologies are providing us with ever more tools to change the world around us. But there is a cost. We’re now awakening to the implications that many of these technologies have for individuals and society…

“AI is now making decisions for judges about the risks that someone accused of a crime will violate the terms of his pretrial probation, even though a growing body of research has shown flaws in such decisions made by machines,” he writes. “A significant problem is that any biases or errors in the data the engineers used to teach the machine will result in outcomes that reflect those biases.”

Joi explains that researchers at the M.I.T. Media Lab have started to refer to these technologies as “extended intelligence” rather than “artificial intelligence.” “The term ‘extended intelligence’ better reflects the expanding relationship between humans and society, on the one hand, and technologies like AI, blockchain, and genetic engineering on the other. Think of it as the principle of bringing society or humans into the loop,” he explains.

Sunday’s seminar will discuss all of these ideas and more, working towards a concept called “AI Alignment” – where the Corporate-controlled AGIs and humans work toward shared goals.

The problem is that almost all of the AGIs being developed are, in fact, some form of corporate AGI.

That’s why a group of AGI scientists founded OpenCog, to provide a framework that anyone can use.

Aaron Swartz Day is working with OpenCog on building an in-world robot concierge for our VR Destination, and we will be discussing and teaching about the privacy and security considerations of AGI and VR in an educational area within the museum – and of course on this website :-). Also #AGIEthics will be a hackathon track this year, along with #EthicalAlgorithms :-)

So! If this is all interesting to you – PLEASE come on Sunday :-) !

There will also be an Aaron Swartz Day planning meeting, way early this year, because really we never stopped working on the projects from last November (you are gonna love it!). The meeting is at the Internet Archive on May 23, 2018 at 6pm. There will be an RSVP soon – but save the date! :-)

More on that soon! :)

References

1. AGI and Corporations Seminar, Internet Archive & Foresight Institute, April 8, 2018
2. AI Engineers Must Open Their Designs to Democratic Control, by Joi Ito for the ACLU, April 2, 2018
3. Machine Bias – There’s software used across the country to predict future criminals. And it’s biased against blacks. by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica, May 23, 2016
4. The OpenCog Foundation – Building Better AGI Minds Together
5. The Swartz-Manning VR Destination, An Aaron Swartz Day Op
6. The Algorithmic Justice League
7. Gendershades.org

Chelsea Manning to Technologists: Please Take the Time To Contemplate Your System’s Potential Misuse

Chelsea Manning will be speaking at the Fifth Annual Aaron Swartz Day Evening Event – Saturday, November 4, 2017 – 7:30 pm – TICKETS (Just going to the hackathon? It’s free.)

Chelsea E. Manning at Dolores Park in San Francisco, September, 2017.

From October 8, 2017, in New York City (at the New Yorker Festival):

I think the most important thing that we have to learn, because I think it’s been forgotten, is that every single one of us has the ability to change things. Each and every one of us has this ability. We need to look to each other and realize our values are what we care about, and then assert them, and say these things, and to take actions in our political discourse to make that happen. Because it’s not going to happen at the Ballot Box. It’s not.

Make your own decisions. Make your own choices. Make your own judgement.

You have to pay attention. For engineers in particular. We design and we develop systems, but the systems that we develop can be used for different things. The software that I was using in Iraq for predictive analysis was the same that you would use in marketing. It’s the same tools. It’s the same analysis. I believe engineers and software engineers and technologists. (That’s a new term that came out while I was away :-)

I guess technologists should realize that we have an ethical obligation to make decisions that go beyond just meeting deadlines or creating a product. What actually takes some chunks of time is to say “what are the consequences of this system?” “How can this be used?” “How can this be misused?” Let’s try to figure out how we can mitigate a software system from being misused. Or decide whether you want to implement it at all. There are systems where, if misused, could be very dangerous. — Chelsea E. Manning, October 8, 2017.

Excerpt from the WNYC The New Yorker Radio Hour (starts at 31:45):
http://www.wnyc.org/story/chelsea-manning-life-after-prison/

About the Ethical Algorithms Panel and Technology Track

This panel is part of the San Francisco Aaron Swartz Day Hackathon. Admission is FREE.

See Caroline Sinders and Kristian Lum, live at 2pm, on November 4th.

Technology Track – Ethical Algorithms
2:00 – 2:45 pm – Ethical Algorithms Panel – w/Q and A.
Kristian Lum (Human Rights Data Analysis Group – HRDAG) – As the Lead Statistician at HRDAG, Kristian’s research focus has been on furthering HRDAG’s statistical methodology (population estimation or multiple systems estimation—with a particular emphasis on Bayesian methods and model averaging).
Caroline Sinders (Wikimedia Foundation) – Caroline uses machine learning to address online harassment at Wikimedia, and before that, she helped design and market IBM’s Watson. Caroline was also just named as one of Forbes’ “8 AI Designers You Need to Know.” Plus special guests TBA.

About the Ethical Algorithms Panel and Technology Track
by Lisa Rein, Co-founder, Aaron Swartz Day

I created this track based on my phone conversations with Chelsea Manning on this topic.

Chelsea was an Intelligence Analyst for the Army and used algorithms in the day to day duties of her job. She and I have been discussing algorithms, and their ethical implications, since the very first day we spoke on the phone, back in October 2015.

Chelsea recently published a New York Times Op-Ed on the subject: The Dystopia We Signed Up For.

From the Op-Ed:

“The consequences of our being subjected to constant algorithmic scrutiny are often unclear… algorithms are already analyzing social media habits, determining credit worthiness, deciding which job candidates get called in for an interview and judging whether criminal defendants should be released on bail. Other machine-learning systems use automated facial analysis to detect and track emotions, or claim the ability to predict whether someone will become a criminal based only on their facial features. These systems leave no room for humanity, yet they define our daily lives.”

A few weeks later, in December, I went to the Human Rights Data Analysis Group (HRDAG) holiday party, and met HRDAG’s Executive Director, Megan Price. She explained a great deal to me about the predictive software used by the Chicago police, and how it was predicting crime in the wrong neighborhoods based on the biased data it was getting from meatspace. Meaning, the data itself was “good” in that it was accurate, but unfortunately, the Chicago PD’s own less-than-desirable behavior was what was being used as a guide for sending officers out into the field. Basically, the existing bad behavior of the Chicago PD was being used to decide where it would police in the future.

This came as a revelation to me. Here we have a chance to stop the cycle of bad behavior, by using technology to predict where the next real crime may occur, but instead we have chosen to memorialize the faulty techniques of the past in software, to be used forever.
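
For readers who like to see the mechanics, here is a deliberately crude Python sketch of that feedback loop. The districts, starting counts, and probabilities are all made up; it is only meant to illustrate the dynamic that researchers (including HRDAG’s Kristian Lum) have documented, not to reproduce any department’s actual software. Two districts have identical true crime rates, but the patrol is always sent wherever the historical records point, and only the patrolled district generates new records.

    import random

    random.seed(1)

    # Invented numbers: two districts with exactly the same true crime
    # probability, but district 1 happens to start out with more recorded
    # incidents in the historical data.
    TRUE_CRIME_PROB = 0.5
    recorded_incidents = {1: 10, 2: 5}

    for day in range(200):
        # "Prediction": send the single patrol wherever past records are highest.
        patrolled = max(recorded_incidents, key=recorded_incidents.get)
        # Crime occurs at the same rate everywhere, but it only enters the
        # records in the district that was actually patrolled.
        if random.random() < TRUE_CRIME_PROB:
            recorded_incidents[patrolled] += 1

    print(recorded_incidents)
    # Prints something like {1: 110, 2: 5}: the district that started out ahead
    # in the (biased) records receives all of the future patrols and generates
    # all of the future records, so the "prediction" looks ever more confirmed.

Notice that nothing in the loop ever looks at the true crime rate; the software only ever sees its own past output, which is exactly the “memorialized into software” problem described above.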

I have gradually come to understand that, although these algorithms are being used in all aspects of our lives, it is not often clear how or why they are working. Now, it has become clear that they can develop their own biases, based on the data they have been given to “learn” from. Often the origin of that “learning data” is not shared with the public.

I’m not saying that we have to understand exactly how every useful algorithm works (which I understand would be next to impossible), but I’m not sure a completely “black box” approach is best, at least when the public, public data, and public safety are involved. (Thomas Hargrove’s Murder Accountability Project’s “open” database is one example of a transparent approach that seems to be doing good things.)

There also appears to be a disconnect within law enforcement. Some precincts seem content to rely on technology for direction, for better or worse, such as the Chicago Police Department with its predictive software. In other situations, such as Thomas Hargrove’s Murder Accountability Project (featured in the article Murder He Calculated), technologists are having a hard time getting law enforcement to take these tools seriously. Even when these tools appear to have the potential to find killers, there are numerous invisible hurdles in the way of any kind of timely implementation. Even in these “life and death” cases, Hargrove has had a very hard time getting anyone to listen to him.

So, how do we convince law enforcement to do more with some data while we are, at the same time, concerned about the oversharing of other forms of public data?

I find myself wondering what can even be done, if simple requests such as “make the NCIC database’s data for unsolved killings searchable” seem to be falling on deaf ears.

I am hoping to have some actual action items that can be followed up on in the months to come, as a result of this panel.

References:

1. The Dystopia We Signed Up For, Op-Ed by Chelsea Manning, New York Times, September 16, 2017. (Link goes to a free version not behind a paywall, at Op-Ed News)

2. Pitfalls of Predictive Policing, by Jessica Saunders for Rand Corporation, October 11, 2016. https://www.rand.org/blog/2016/10/pitfalls-of-predictive-policing.html

3. Predictions put into practice: a quasi-experimental evaluation of Chicago’s predictive policing pilot. by Jessica Saunders, Priscillia Hunt, John S. Hollywood, for the Journal of Experimental Criminology, August 12, 2016. https://link.springer.com/article/10.1007/s11292-016-9272-0

4. Murder He Calculated – by Robert Kolker, for Bloomberg.com, February 12th 2017.

5. Murder Accountability Project, founded by Thomas Hargrove. http://www.murderdata.org/

6. Secret Algorithms Are Deciding Criminal Trials and We’re Not Even Allowed to Test Their Accuracy – By Vera Eidelman, William J. Brennan Fellow, ACLU Speech, Privacy, and Technology Project, September 15, 2017. https://www.aclu.org/blog/privacy-technology/surveillance-technologies/secret-algorithms-are-deciding-criminal-trials-and

7. Machine Bias – There’s software used across the country to predict future criminals. And it’s biased against blacks. by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

8. Criminality Is Not A Nail – A new paper uses flawed methods to predict likely criminals based on their facial features. by Katherine Bailey for Medium.com, November 29, 2016. https://medium.com/backchannel/put-away-your-machine-learning-hammer-criminality-is-not-a-nail-1309c84bb899


Caroline Sinders Named By Forbes as an “AI Designer That You Need To Know”

See Caroline Sinders at this year’s Aaron Swartz Day International Hackathon, at the San Francisco Hackathon’s Ethical Algorithms Panel, Saturday at 2pm, and at the evening event, Saturday night, November 4, at 7:30 pm.

8 AI Designers That You Need To Know by Adelyn Zhou for Forbes.

Caroline Sinders – Machine Learning Designer and Researcher, former Interaction Designer for IBM Watson

Caroline Sinders

Caroline is an artist, designer, and activist who also loves writing code. She helped design and market IBM Watson, a billion-dollar artificial intelligence system built on advanced natural language processing, automated reasoning, machine learning, and other technologies. Sinders’ work on Watson focused on user flows and the impact of human decision-making in the development of robotics software. She recently left her dream job at IBM to pursue an equally challenging fellowship at Open Labs. A passionate crusader against online harassment, Caroline probes the different ways design can influence and shape digital conversations, with the ultimate goal of using machine learning to address online harassment. You can weigh her strong opinions on Twitter, Medium, LinkedIn, and her personal website.