Tag Archives: AGI Ethics

Artificial General Intelligences (AGIs) & Corporations Seminar at the Internet Archive Tomorrow (Sunday)

Note: if you can’t make this event, check out this literature review and this paper, which will still give you a good idea of some of the subject matter :)

When: Sunday, April 8, 2018
Where: The Internet Archive, 300 Funston Ave, San Francisco, CA
Time: 2-6pm

Artificial General Intelligences & Corporations

Description:

Even if we don’t know yet how to align Artificial General Intelligences with our goals, we do have experience in aligning organizations with our goals. Some argue that corporations are, in fact, Artificial Intelligences – legally, at least, we already treat them as persons.

The Foresight Institute, along with the Internet Archive, invites you to spend an afternoon examining AI alignment – especially whether our interactions with different types of organizations, e.g. our treatment of corporations as persons, offer insights into how to align AI goals with human goals.

While this meeting focuses on AI safety, it merges AI safety, philosophy, computer security, and law and should be highly relevant for anyone working in or interested in those areas.

Why this is really really important:

As we learned during last year’s Ethical Algorithms panel, there are many different ways that unchecked black box algorithms are being used against citizens daily.

This kind of software can literally ruin a person’s life through no fault of their own – especially if they are already being discriminated against or unfairly profiled in some way in real life. This is because algorithms tend to amplify and exaggerate any biases already present in the data fed into the system (the data it “learns” from).
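That amplification effect can be illustrated with a toy sketch (the scenario and numbers below are hypothetical, not from the seminar): a classifier that simply optimizes accuracy on biased historical decisions turns a 60% skew against one group into a 100% rule.

```python
from collections import Counter

# Hypothetical historical loan decisions: (group, outcome) pairs.
# Group "B" was denied 60% of the time in the past -- a bias baked
# into the training data itself.
history = [("A", "approve")] * 60 + [("A", "deny")] * 40 \
        + [("B", "approve")] * 40 + [("B", "deny")] * 60

def train_majority_classifier(data):
    """Return a model that predicts the most common past outcome per group."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_classifier(history)

# A 60/40 historical skew becomes a 100/0 rule: the model now denies
# group "B" every single time -- the bias is amplified, not just copied.
print(model)  # {'A': 'approve', 'B': 'deny'}
```

Without the original data sets, an auditor seeing only the model’s output cannot tell whether that hard rule came from a 60% skew or a 99% one – which is exactly why access to the training data matters.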

Algorithms are just one of many tools that an AGI (Artificial General Intelligence) might use in the course of its daily activities on behalf of whatever corporation it operates for.

The danger lies in AGIs making decisions based on the faulty output of unchecked black-box algorithms. For this reason, preservation of, and public access to, the original data sets used to train these algorithms is of paramount importance. And currently, that just isn’t the case.

The promise of AGIs is downright exciting, but how do we ensure that corporate-driven AGIs do not gain unruly control over public systems?

Arguably, corporations are already given too many rights – rights rivaling, or even surpassing, those of actual humans at this point.

What happens when these corporate “persons” have AGIs out in the world, interacting with live humans and other AGIs on a constant basis? (AGIs never sleep.) How many tasks could your AGI do for you while you sleep at night? What instructions would you give your AGI? And whose “fault” is it when the goals of an AGI conflict with those of a living person?

Joi Ito, the Director of the MIT Media Lab, wrote a piece for the ACLU this week, concluding that AI Engineers Must Open Their Designs to Democratic Control: “The internet, artificial intelligence, genetic engineering, crypto-currencies, and other technologies are providing us with ever more tools to change the world around us. But there is a cost. We’re now awakening to the implications that many of these technologies have for individuals and society…

AI is now making decisions for judges about the risks that someone accused of a crime will violate the terms of his pretrial probation, even though a growing body of research has shown flaws in such decisions made by machines,” he writes. “A significant problem is that any biases or errors in the data the engineers used to teach the machine will result in outcomes that reflect those biases.”

Joi explains that the researchers at the MIT Media Lab have started to refer to these technologies as “extended intelligence” rather than “artificial intelligence.” “The term ‘extended intelligence’ better reflects the expanding relationship between humans and society, on the one hand, and technologies like AI, blockchain, and genetic engineering on the other. Think of it as the principle of bringing society or humans into the loop,” he explains.

Sunday’s seminar will discuss all of these ideas and more, working towards a concept called “AI Alignment” – where corporate-controlled AGIs and humans work toward shared goals.

The problem is that almost all of the AGIs being developed are, in fact, some form of corporate AGI.

That’s why a group of AGI scientists founded OpenCog, to provide a framework that anyone can use.

Aaron Swartz Day is working with OpenCog on building an in-world robot concierge for our VR Destination, and we will be discussing and teaching about the privacy and security considerations of AGI and VR in an educational area within the museum – and of course on this website :-). Also #AGIEthics will be a hackathon track this year, along with #EthicalAlgorithms :-)

So! If this is all interesting to you – PLEASE come on Sunday :-) !

There will also be an Aaron Swartz Day planning meeting – way early this year, because really we never stopped working on the projects from last November (you are gonna love it!). The meeting is at the Internet Archive on May 23, 2018 at 6pm. There will be an RSVP soon – but save the date! :-)

More on that soon! :)

References

  1. AGI and Corporations Seminar, Internet Archive & Foresight Institute, April 8, 2018
  2. AI Engineers Must Open Their Designs to Democratic Control, by Joi Ito for the ACLU, April 2, 2018
  3. Machine Bias – There’s software used across the country to predict future criminals. And it’s biased against blacks. By Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica, May 23, 2016
  4. The OpenCog Foundation – Building Better AGI Minds Together
  5. The Swartz-Manning VR Destination, An Aaron Swartz Day Op
  6. The Algorithmic Justice League
  7. Gendershades.org


Big Weekend At The Internet Archive – John Perry Barlow Symposium and AGI & Corporations Seminar

We wanted to make sure that everyone knew about two important events happening this weekend at the Internet Archive.

The first is a John Perry Barlow Symposium from 2pm-6pm – which we think might actually be better than an old-fashioned memorial :-)

THIS EVENT IS SOLD OUT, BUT YOU CAN WATCH THE LIVESTREAM HERE: https://eff.org/ArchiveYouTube

Much like Aaron Swartz Day, rather than simply mourn the loss of our friend, the Internet Archive is taking this opportunity to discuss and promote many of the ideals that John Perry stood for, with speakers who knew him well, so that we can carry on working in his name:

Edward Snowden, noted whistleblower and President of Freedom of the Press Foundation

Cindy Cohn, Executive Director of the Electronic Frontier Foundation

Cory Doctorow, celebrated scifi author and Editor in Chief of Boing Boing

Joi Ito, Director of the MIT Media Lab

John Gilmore, EFF Co-founder, Board Member, entrepreneur and technologist

Trevor Timm, Executive Director of Freedom of the Press Foundation

Shari Steele, Executive Director of the Tor Project and former EFF Executive Director

Mitch Kapor, Co-founder of EFF and Co-chair of the Kapor Center for Social Impact

Pam Samuelson, Richard M. Sherman Distinguished Professor of Law and Information at the University of California, Berkeley

Steven Levy, Wired Senior Writer, and author of Hackers, In the Plex, and other books

Amelia Barlow, daughter of John Perry Barlow

See you there!

Then on Sunday, the Internet Archive is putting on a seminar on AGI (Artificial General Intelligences) & Corporations that is as frightening and interesting as it sounds.

This ties in to this year’s Aaron Swartz Day event, where we are adding an “AGI Ethics” track. (The Ethical Algorithms track is sticking around too.)

About the event:

Even if we don’t know yet how to align Artificial General Intelligences with our goals, we do have experience in aligning organizations with our goals. Some argue that corporations are, in fact, Artificial Intelligences – legally, at least, we already treat them as persons.

The Foresight Institute, along with the Internet Archive, invites you to spend an afternoon examining AI alignment – especially whether our interactions with different types of organizations, e.g. our treatment of corporations as persons, offer insights into how to align AI goals with human goals.

While this meeting focuses on AI safety, it merges AI safety, philosophy, computer security, and law and should be highly relevant for anyone working in or interested in those areas.

Tickets – $10

See you there too! :-)