hey hey hey! at the next D.R.E.A.M. (Data Rules Everything Around Me) this Thursday 6pm at ICNC/MakeCity (320 N Damen Ave) we’re gonna be talking about Bias in AI.
if u’ve been researching AI + specifically Machine Learning these days i’m sure u’re familiar with the bias issue. AI doesn’t just control our netflix recommendations && social media feeds (even there, pause for concern O_O) but is also being used in criminal justice to sentence folks, or used by employers to hire/interview new employees. These days all sorts of institutions are leveraging AI tools: insurance companies, universities, hospitals && of course advertising. This brings up all kinds of important ethical questions: how does human bias (in the dataset? in a neural net’s architecture?) enter the picture? is there transparency around the bias embedded in an ML API? what about transparency around how/where AI is being used? who gets to wield the power of AI?
these are big questions… i know, but fortunately we’ve got just the person to help us tackle them! Margaret Mitchell leads the Ethical Artificial Intelligence team in Google’s Research & Machine Intelligence group. She’ll be flying in this week to chat through all these big questions w/us + talk about the real/practical solutions she’s been working on!
general info http://dream.netizen.org/