Today's recommended open source project: "Say Something Fun: geeksay"
Today's recommended English article: "An Algorithm May Decide Who Gets Suicide Prevention"
Today's recommended open source project: "Say Something Fun: geeksay". Project link: GitHub
Why we recommend it: Sometimes the same idea can be expressed with many different words. Imagine using || for "or", && for "and", and self in place of "me"... People inside the circle will understand, but anyone who has never been exposed to this world will probably be left completely lost. It's best not to talk this way with people who don't know these terms; that said, occasionally speaking like this with people who do understand them can be a fun way to amuse yourselves.
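To give a rough idea of what such a tool does, here is a minimal sketch of the substitution idea, assuming nothing about geeksay's real code: it simply swaps everyday words for their geek-speak counterparts. The word list and the geekify function below are illustrative inventions, not geeksay's actual dictionary or API.

```python
# A minimal, hypothetical sketch of the idea behind geeksay:
# replace everyday words with "geek speak" counterparts.
import re

GEEK_WORDS = {
    "or": "||",
    "and": "&&",
    "me": "self",
    "love": "<3",
}

def geekify(text: str) -> str:
    """Rewrite a sentence using the substitutions above."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return GEEK_WORDS.get(word.lower(), word)
    # \b keeps substitutions on whole words only ("me" but not "method")
    return re.sub(r"\b\w+\b", swap, text)

if __name__ == "__main__":
    print(geekify("you and me can stay or go"))
    # -> "you && self can stay || go"
```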
Today's recommended English article: "An Algorithm May Decide Who Gets Suicide Prevention", by Jake Pitre
Original article: https://onezero.medium.com/an-algorithm-may-decide-who-gets-suicide-prevention-f46e8f7055c1
Why we recommend it: Artificial intelligence and its algorithms really are getting better at helping people, but algorithmic bias is holding them back: it lets them make mistakes faster, and in greater numbers, than humans do.
An Algorithm May Decide Who Gets Suicide Prevention
A recent study on Google's search results raises questions about the faith we put in algorithms — and the tech companies that use them
For anyone unfamiliar with the suffocating feeling of suicidal ideation, the significance of a phone call with a stranger might seem dubious. In that moment, the lowest of lows, speaking with someone who understands what you're going through can — quite literally — mean the difference between life and death.
While it remains under debate just how effective suicide hotlines are in terms of preventing suicides, there is plenty of anecdotal evidence that lives have been saved by their existence. Based on that evidence, in 2011, Google began to show suicide helpline numbers at the top of results for searches like "effective suicide methods."
Like so much of our online lives, the decision to show this advice alongside results is determined by an algorithm. Facebook has been doing something similar since 2017, when it began monitoring posts for content about suicide or self-harm with pattern recognition algorithms, and then sending those users relevant resources.
Yet, while these steps are helpful, the algorithms concerned do not perform consistently across the world, according to a study published earlier this year in the journal New Media & Society. The researchers — Sebastian Scherr at the University of Leuven and Mario Haim and Florian Arendt at the University of Munich — found that algorithmic bias is increasingly becoming a challenge for technological growth as algorithm creators struggle to confront the limitations of their programming. In this case, they found that Google's algorithms contribute to a digital divide in access to health information online.
An algorithm, it seems, could determine, in some cases, who gets shown lifesaving information, and who doesn't.
The researchers behind the New Media & Society paper set out to understand this odd quirk of Google's algorithm, and to find out why the company seemed to be serving some markets better than others. They developed a list of 28 keywords and phrases related to suicide, Scherr says, and worked with nine researchers from different countries who accurately translated those terms into their own languages. For 21 days, they conducted millions of automated searches for these phrases, and kept track of whether hotline information showed up or not.
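As a rough sketch of what such a tally could look like in code (purely illustrative; the query list, locales, and the fetch_search_page helper below are hypothetical placeholders, not the researchers' actual tooling), the loop below runs each translated query in each locale and records how often an advice box appears:

```python
# Hypothetical outline of the tally described above: run the translated
# queries repeatedly and record how often an advice box appears per locale.
from collections import defaultdict

KEYWORDS = ["effective suicide methods"]  # one of the 28 study phrases; full list not reproduced here
LOCALES = ["en-US", "ko-KR", "de-DE"]     # illustrative subset of the 11 countries tested

def fetch_search_page(query: str, locale: str) -> str:
    """Placeholder: return the HTML of a search results page for this query/locale."""
    raise NotImplementedError

def has_advice_box(html: str) -> bool:
    """Placeholder heuristic: does the page show a helpline advice box?"""
    return "helpline" in html  # illustrative marker only

def tally(days: int = 21) -> dict:
    shown = defaultdict(int)
    total = defaultdict(int)
    for _ in range(days):
        for locale in LOCALES:
            for query in KEYWORDS:
                page = fetch_search_page(query, locale)
                total[locale] += 1
                shown[locale] += has_advice_box(page)
    # share of searches per locale where the advice box appeared
    return {loc: shown[loc] / total[loc] for loc in LOCALES}
```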
They thought these results might simply, logically, show up in countries with higher suicide rates, but the opposite was true. Users in South Korea, which has one of the world's highest suicide rates, were only served the advice box about 20% of the time. They tested different browser histories (some completely clean, some full of suicide-related topics), with computers old and new, and tested searches in 11 different countries.
It didn't seem to matter: the advice box was simply much more likely to be shown to people using Google in the English language, particularly in English-speaking countries (though not in Canada, which Scherr speculates was probably down to geographical rollout). "If you're in an English-speaking country, you have over a 90% chance of seeing these results — but Google operates differently depending on which language you use," he said. Scherr speculates that using keywords may simply have been the easiest way to implement the project, but adds that it wouldn't take much to offer it more effectively in other countries, too.
A Google spokesperson, who asked not to be quoted directly, said that the company is refining these algorithms. The advice boxes require the cooperation of local organizations which may not always be available, they said, but that relevant resources will still show up in regular search results. Google said the service does not have comprehensive global coverage, and while it is actively working on new languages and locations, rolling that out takes time.
Suicide prevention might not have the same status as Google's high profile artificial intelligence projects, but it is nonetheless a feature that could potentially help save lives. Some 800,000 people take their own lives each year around the world, or about one person every 40 seconds. "There's a lot at stake if you don't provide the same chances to everyone," Scherr told me. "Google is a global company, with smart people all over the world. They shouldn't be forgetting about the rest of the world. They could fix this. They could do that today."
Algorithms have increasing influence over every aspect of our lives, from the triviality of Netflix recommendations to determining something as profound as suicide prevention. In 2018, a secret Amazon recruitment algorithm analyzed the resumes of job applicants, but was found to favor male candidates over female ones. And in 2016, ProPublica revealed that one risk assessment algorithm used in many courtrooms routinely recommended harsher sentences to black people than to white offenders — a disturbing reminder about the power we've allowed algorithms to wield, even as many of us assume that computers are inherently more reliable than human beings.
"A.I. makes mistakes, only faster and at scale," explains Kirsten Martin, a researcher at George Washington University's School of Business and an expert in A.I. ethics. "There has to be an assumption that mistakes will be made like a human, but at a faster pace. That should be a standard governing principle."
Martin says we seem to be letting algorithms run rampant before anyone truly understands what they're doing. "Just because you ignore the ethics of a decision doesn't mean it goes away — it just means you do it badly," Martin told me. "Google's suicide result box is interesting because they have a good goal in mind, and they're at least conscious of the fact that they are perhaps saving people in one group and not others."
Martin warns that although Google seems to be attempting to perform a social good by expanding its suicide prevention algorithms, it raises an important question. "Why aren't they doing that all the time? Why aren't they being that thoughtful about, say, advertising? They're admitting that they can do it right when they want to, so it's a shame they aren't using that principle across all their algorithms."
Google, for its part, has made its A.I. principles public, though they mostly consist of vague aphorisms like: "Be accountable to people." The company also recently announced an internal A.I. ethics council, which was heavily criticized for its makeup and then cancelled after just one week.
"We care deeply that A.I. is a force for good in the world, and that it is applied ethically, and to problems that are beneficial to society," the company said in a January research summary. The report highlighted the company's Google A.I. for Social Impact Challenge, which channels $25 million in funding to groups interested in using Google artificial intelligence. One of those grants went to The Trevor Project, an LGBTQ+ organization that intends to use machine learning to help determine suicide risk levels.
As these tech companies continue to wrestle with this issue and their responsibility to address it, and continue to utilize algorithms to handle it, perhaps it's worth remembering that all this technological effort has one simple goal — to make it easier for one person in need to hear the voice of someone who will listen.
Download the 開源日報 app: https://openingsource.org/2579/
Join us: https://openingsource.org/about/join/
Follow us: https://openingsource.org/about/love/