
Technology series: Part two: Do you really believe that a robot can know you are lying and that scientists have created an AI (artificial intelligence) that can detect deception in the courtroom that is already 'significantly better' than humans? Last word - at least 'last two initials' - to Mark Godsey of the Wrongful Convictions Blog.

Next: Technology series: Part Three: Artificial intelligence (AI) is evolving very quickly. But can it outpace the biases of its creators, humans? Kate Crawford, a Microsoft researcher and co-founder of AI Now, a research institute studying the social impact of artificial intelligence, thinks not. Freelance writer and speaker Sidney Fussell describes her "incredible keynote speech," entitled “The Trouble with Bias,” on Gizmodo: "“An allocative harm is when a system allocates or withholds a certain opportunity or resource,” she began. It’s when AI is used to make a certain decision, let’s say mortgage applications, but unfairly or erroneously denies them to a certain group. She offered the hypothetical example of a bank’s AI continually denying mortgage applications to women. She then offered a startling real-world example: a risk assessment AI routinely found that black criminals were a higher risk than white criminals. (Black criminals were referred to pre-trial detention more often because of this decision.) Representation harms “occur when systems reinforce the subordination of some groups along the lines of identity,” she said—essentially, when technology reinforces stereotypes or diminishes specific groups. “This sort of harm can take place regardless of whether resources are being withheld.”"
Previous: Technology series: Part One: 'Software Is Deciding How Long People Spend in Jail,' Truthdig reports. But is it doing a good job? "A 2016 ProPublica study found that COMPAS is “particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.” The analysis also determined that white offenders were wrongly given particularly low scores that were poor predictors of their real rates of recidivism. Ellora Thadaney Israni, a former software engineer and current Harvard Law student, notes that without constant corrective upkeep to make AI programs like COMPAS unlearn their bigotry, those biases tend to be further compounded. “The computer is worse than the human,” Israni writes at the New York Times. “It is not simply parroting back to us our own biases, it is exacerbating them.”"

PUBLISHER'S NOTE: Several recent articles have predicted that computer technology is on the way to becoming a valuable tool for detecting lies. Take "Will augmented reality make lying obsolete? Honestly, the biggest culture-changing application for augmented reality will be always-on lie detection," by contributing columnist Mike Elgan, published by Computerworld on December 16, 2017: "Some 35 years ago, late-night talk show host Johnny Carson imagined what it would be like if politicians were hooked up to lie detectors. Soon, you won’t have to imagine it. There will be an app for that. Old-fashioned lie detectors, called polygraphs, track blood pressure, breathing and other physiological metrics to gauge stress levels during questioning. The administrator of a polygraph asks questions to determine a baseline response, then watches for signs of stress with additional questioning. Polygraphs are unreliable and controversial. They have to be administered by an expert using expensive equipment in a controlled environment. Even then, the results are not admissible as evidence in court in the U.S. and the U.K. But the future of lie detection is A.I. A.I. can take various “signals,” such as eye movements, facial gestures, body movements, voice intonations and others, to estimate the truthfulness of a person’s statements."

https://www.computerworld.com/article/3243049/artificial-intelligence/will-augmented-reality-make-lying-obsolete.html
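
To make the "signals" approach Elgan describes concrete, here is a minimal sketch of how such a system might fuse several cues into a single deception estimate. Everything in it is assumed for illustration: the feature names, the weights and the logistic model are hypothetical stand-ins for whatever a real system would learn from labelled data.

```python
# Minimal sketch of signal fusion for lie detection, assuming a simple
# logistic model. All feature names, weights and values are hypothetical.
import math

# Hypothetical per-statement measurements, each normalised to [0, 1].
signals = {
    "gaze_aversion": 0.7,      # eye movements
    "micro_expression": 0.4,   # facial gestures
    "posture_shift": 0.2,      # body movements
    "pitch_variability": 0.6,  # voice intonation
}

# Hypothetical weights (positive = more associated with deception);
# a real system would learn these from training data.
weights = {
    "gaze_aversion": 1.2,
    "micro_expression": 2.0,
    "posture_shift": 0.8,
    "pitch_variability": 1.5,
}
bias = -2.5  # baseline log-odds of deception

# Combine the cues into a single probability with the logistic function.
logit = bias + sum(weights[name] * value for name, value in signals.items())
p_deception = 1 / (1 + math.exp(-logit))

print(f"Estimated probability the statement is deceptive: {p_deception:.2f}")
```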

Then there's the Daily Mail story, "The robot that knows when you're lying: Scientists create an AI that can detect deception in the courtroom (and it's already 'significantly better' than humans)": "The system, called DARE, was trained by watching 15 videos of people in court. It was trained to recognise five expressions that indicate someone is lying: frowning, raised eyebrows, lips turning up, lips protruded and head tilt. In a final test, the system performed with 92 per cent accuracy. The researchers describe this performance as 'significantly better' than humans."

http://www.dailymail.co.uk/sciencetech/article-5197747/AI-detects-expressions-tell-people-lie-court.html#ixzz52NPR9LbK
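
For illustration, the five cues the Daily Mail lists could be encoded as a binary feature vector, the kind of input a classifier over facial expressions might consume. The sketch below is a hypothetical placeholder, not DARE's actual pipeline:

```python
# The five cues the Daily Mail says DARE was trained to recognise,
# encoded as a 0/1 feature vector. Illustrative placeholder only; the
# real system reportedly learns from courtroom video.
EXPRESSIONS = [
    "frowning",
    "raised_eyebrows",
    "lips_turning_up",
    "lips_protruded",
    "head_tilt",
]

def expression_vector(detected):
    """Map the set of cues detected in a clip to a 0/1 feature vector."""
    return [1 if expr in detected else 0 for expr in EXPRESSIONS]

# Hypothetical clip in which two of the five cues were detected.
print(expression_vector({"frowning", "head_tilt"}))  # [1, 0, 0, 0, 1]
```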

I prefer to leave the last word to Mark Godsey of the Wrongful Convictions Blog, in reaction to the Daily Mail piece:
"Look at this. It appears to be based on the premise that certain facial movements definitely indicate lying in all humans. That is a faulty premise. The robot is 92% accurate at picking up those facial expressions, which the manufacturer equates with 92% accuracy in lie detection. I call BS!"
#BlindInjusticeChapter6BlindIntuition.

https://wrongfulconvictionsblog.org/2017/12/22/the-robot-that-knows-when-youre-lying-scientists-create-an-ai-that-can-detect-deception-in-the-courtroom/
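
Godsey's objection can be put in numbers. Even granting the 92 per cent figure for spotting the expressions, end-to-end accuracy at detecting lies also depends on how often liars actually show those cues and how often truthful people show them too. The link strengths and base rate below are made up purely to illustrate the gap:

```python
# Back-of-envelope illustration of Godsey's point: accuracy at detecting
# an expression is not accuracy at detecting a lie. All numbers are
# assumed for illustration.
p_expr_detect = 0.92       # chance the system correctly reads the face
p_expr_given_lie = 0.60    # assumed: chance a liar shows the cue
p_expr_given_truth = 0.30  # assumed: chance a truthful person shows it
p_lie = 0.50               # assumed base rate of lying in the clips

# Probability the system flags a clip, split by ground truth. It flags
# whenever it believes the cue is present, rightly or wrongly.
p_flag_lie = p_expr_given_lie * p_expr_detect + (1 - p_expr_given_lie) * (1 - p_expr_detect)
p_flag_truth = p_expr_given_truth * p_expr_detect + (1 - p_expr_given_truth) * (1 - p_expr_detect)

# End-to-end accuracy: flag the liars, clear the truth-tellers.
accuracy = p_lie * p_flag_lie + (1 - p_lie) * (1 - p_flag_truth)
print(f"Lie-detection accuracy: {accuracy:.0%}")  # about 63%, not 92%
```

On these made-up assumptions the pipeline scores about 63 per cent, barely better than a coin flip, even though the expression detector itself is 92 per cent accurate.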

In truth, I think Mark has got it right. Long live the Wrongful Convictions Blog.

Harold Levy, Publisher, The Charles Smith Blog.
