Every day we recommend one quality open source project on GitHub and one hand-picked English article on technology or programming. Follow Open Source Daily. QQ group: 202790710; Weibo: https://weibo.com/openingsource; Telegram group: https://t.me/OpeningSourceOrg


Today's recommended open source project: "Other People's HTML and CSS (Ver 3.0): purecss-zigario". Portal: GitHub link

Why we recommend it: Time to appreciate some art again. Same author, same rule as before. This time the piece is a medieval-style cigarette advertisement. Honestly, every time I see that line, "hand-coded using nothing but HTML and CSS", and then look back at the things I wrote while practising... anyone who can create works like these is truly a master.

View it here: https://cyanharlow.github.io/purecss-zigario/


Today's recommended English article: "Why engineers need to know human rights", by DigitalAgenda

Link to the original: https://medium.com/digitalagenda/why-engineers-need-to-know-human-rights-347e6e7cb8b0

Why we recommend it: Why do engineers need to understand human rights? The answer is simple: technology itself is not at fault, but if it is not used correctly, wrongs will follow.

Why engineers need to know human rights

Technology dominates our lives — and can transgress all kinds of legal frameworks. That's why we should teach human rights law to software engineers, says Ana Beduschi.

Artificial intelligence (AI) is finding its way into more and more aspects of our daily lives. It powers the smart assistants on our mobile phones and virtual home assistants. It is in the algorithms designed to improve our health diagnostics. And it is in the predictive policing tools that police forces use to fight crime.

Each of these examples throws up potential problems when it comes to the protection of our human rights. Predictive policing, if not correctly designed, can lead to discrimination based on race, gender or ethnicity.

Privacy and data protection rules apply to information related to our health. Similarly, the systematic recording and use of our smartphones' geographical location may breach privacy and data protection rules, and could raise concerns over digital surveillance by public authorities.

Software engineers are responsible for the design of the algorithms behind all of these systems. It is the software engineers who enable smart assistants to answer our questions more accurately, help doctors to improve the detection of health risks, and allow police officers to better identify pockets of rising crime risks.

Software engineers do not usually receive training in human rights law. Yet with each line of code, they may well be interpreting, applying and even breaching key human rights law concepts — without even knowing it.

This is why it is crucial that we teach human rights law to software engineers. Earlier this year, a new EU regulation forced businesses to become more open with consumers about the information they hold. Known as the GDPR, it may be familiar to you as the subject of numerous desperate emails begging you to opt in so as to remain on various databases.

The GDPR increased restrictions on what organisations can do with your data and extended the rights of individuals to access and control data about them. These moves towards privacy-by-design and data-protection-by-design are great opportunities to integrate legal frameworks into technology. On their own, however, they are not enough.

For example, a better knowledge of human rights law can help software developers understand what indirect discrimination is and why it is prohibited by law. (Any discrimination based on race, colour, sex, language, religion, political or other opinion, national or social origin, property, association with a national minority, birth or other status is prohibited under article 14 of the European Convention on Human Rights.)

Direct discrimination occurs when an individual is treated less favourably based on one or more of these protected grounds. Indirect discrimination occurs when a rule that is neutral in appearance leads to less favourable treatment of an individual (or a group of individuals).
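To make indirect discrimination concrete, the sketch below computes per-group selection rates for a decision rule and the ratio between the lowest and highest rate. It is a minimal illustration assuming binary accept/reject decisions with known group membership; the 0.8 threshold is the "four-fifths" heuristic from US employment-discrimination practice, used here only as an example, not a standard the article itself cites.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Acceptance rate per group.

    decisions: iterable of (group, accepted) pairs,
    where accepted is True or False.
    """
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += ok
    return {g: accepted[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.

    A rule that looks neutral but drives this ratio well below
    1.0 (0.8 is a common heuristic cut-off) is treating some
    groups much less favourably: the shape of indirect
    discrimination.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical data: a "neutral" filter on postcode that in fact
# accepts group A twice as often as group B.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 40 + [("B", False)] * 60)
print(selection_rates(sample))         # {'A': 0.8, 'B': 0.4}
print(disparate_impact_ratio(sample))  # 0.5, well below 0.8
```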

Similarly, understanding the intricacies of the right to a fair trial and its corollary, the presumption of innocence, may lead to better informed choices in algorithm design. That could help avoid algorithms presuming that the number of police arrests in a multi-ethnic neighbourhood correlates with the number of actual criminal convictions.

Even more importantly, it would assist them in developing unbiased choices of datasets that are not proxies for discrimination based on ethnicity or race. For example, wealth and income data combined with geographic location data may be used as a proxy for the identification of populations from a certain ethnic background if they tend to concentrate in a particular neighbourhood.
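A simple way to surface such proxy features is to measure how strongly each candidate feature predicts the protected attribute before training on it. The sketch below is one hypothetical approach, assuming numerically encoded features and a 0/1-encoded protected attribute; the threshold and the toy data are made up for illustration, and a real audit would use richer statistical tests.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_proxies(features, protected, threshold=0.5):
    """Names of features whose correlation with the protected
    attribute exceeds the (hypothetical) threshold."""
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) > threshold]

# Toy data: neighbourhood code and income track ethnicity
# (encoded 0/1), while shoe size does not.
protected = [0, 0, 0, 1, 1, 1, 0, 1]
features = {
    "neighbourhood": [1, 1, 2, 7, 8, 8, 2, 7],
    "income":        [55, 60, 58, 30, 28, 27, 61, 33],
    "shoe_size":     [42, 39, 44, 41, 40, 43, 38, 42],
}
print(flag_proxies(features, protected))  # ['neighbourhood', 'income']
```

Flagged features are not automatically forbidden, but they are exactly the ones a developer with some human rights training would know to question.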

Likewise, a better understanding of how legal frameworks on human rights operate may stimulate the creation of solutions for enhancing compliance with legal rules. For instance, there is a great need for technological due process solutions, by which individuals could easily challenge AI-based decisions made by public authorities that directly affect them. This could be the case for parents wrongly identified as potential child abusers by opaque algorithms used by local authorities.

Such solutions could also be relevant to the private sector. For example, decisions on insurance premiums and loans are often determined by profiling and scoring algorithms hidden behind black boxes. Full transparency and disclosure of these algorithms may not be possible or desirable due to the nature of these business models.

Thus, a due process-by-design solution could allow individuals to easily challenge such decisions before accepting an offer. As our contemporary societies inexorably evolve towards intensive AI applications, we need to bear in mind that the humans behind the AI curtain have the power to make (mis)informed decisions that affect us all.
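The article leaves open what a due process-by-design mechanism would look like in code. One possible reading, sketched below with entirely hypothetical names and fields, is a decision record that keeps the outcome and its main factors alongside a channel for the affected person to contest it before the decision takes effect.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One contestable automated decision (hypothetical design)."""
    subject_id: str
    outcome: str                 # e.g. "premium_increased"
    main_factors: list           # human-readable reasons
    made_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    challenge: Optional[str] = None   # set when the subject objects
    under_review: bool = False

    def contest(self, grounds: str) -> None:
        """Record an objection; a human reviewer must now decide
        before the outcome can stand."""
        self.challenge = grounds
        self.under_review = True

# Usage: the scoring system explains itself, the applicant contests,
# and the record is routed to human review.
record = DecisionRecord(
    subject_id="applicant-42",
    outcome="premium_increased",
    main_factors=["postcode risk band", "claims history"],
)
record.contest("postcode acts as a proxy for ethnicity in my area")
print(record.under_review)  # True
```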

It is high time that resources and energy were directed towards educating them not only in cutting-edge technology, but also in the relevant human rights rules.

Ana Beduschi is a senior lecturer in law at the University of Exeter. This article is republished from The Conversation under a Creative Commons licence.
