Today's recommended English article: "Google scientists reportedly told to make AI look more 'positive' in research papers," by Corinne Reichert
Why we recommend it: According to a Reuters report on Wednesday, Google parent Alphabet has been asking its scientists to ensure that AI technology looks more "positive" in their research papers. A new review procedure is reportedly in place, under which researchers consult Google's legal, policy, or PR teams for a "sensitive topics review" before exploring issues such as face analysis and race, gender, and political affiliation.
Google scientists reportedly told to make AI look more 'positive' in research papers

Google parent Alphabet has been asking its scientists to ensure that AI technology looks more "positive" in their research papers, says a Wednesday report by Reuters. A new review procedure is reportedly in place so researchers consult with Google's legal, policy or PR teams for a "sensitive topics review" before exploring things like face analysis, and racial, gender and political affiliation.
“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” one of the internal webpages on the policy says, according to Reuters.
Other Google authors were told to “take great care to strike a positive tone,” internal correspondence shared with Reuters said.
The report follows Google CEO Sundar Pichai earlier this month apologizing for the handling of artificial intelligence researcher Timnit Gebru’s departure from the company and saying it would be investigated. Gebru left Google on Dec. 4, saying she’d been forced out of the company over an email sent to co-workers.
The email criticized Google’s Diversity, Equity and Inclusion operation, according to Platformer, which posted the full text of her missive. Gebru said in the posted email that she’d been asked to retract a research paper she’d been working on, after receiving feedback on it.
“You’re not supposed to even know who contributed to this document, who wrote this feedback, what process was followed or anything,” she wrote in the email. “You write a detailed document discussing whatever pieces of feedback you can find, asking for questions and clarifications, and it is completely ignored.
“Silencing marginalized voices like this is the opposite of the NAUWU principles which we discussed. And doing this in the context of ‘responsible AI’ adds so much salt to the wounds,” she added. NAUWU stands for “nothing about us without us,” the idea that policies shouldn’t be made without input from the people they affect.
Google didn’t immediately respond to a request for comment.