Today's recommended open-source project: git-history ("flashback mode" for file history)
Today's recommended English article: "Why The World Needs Trustworthy Chatbots"
Today's recommended open-source project: git-history ("flashback mode" for file history). Link: GitHub
Why we recommend it: How many steps does it take to inspect a file's history on GitHub? 1. Open your browser. 2. Open the file on github.com. 3. Change github.com to github.githistory.xyz in the address bar. This project makes it quick and painless to browse the history of any file; if you use GitHub regularly, it can be a big help.
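If you prefer to do the rewrite from a script, here is a minimal sketch in Python. The only rule taken from the description above is the github.com → github.githistory.xyz host swap; the repository and file path in the example are hypothetical.

```python
# Minimal sketch: rewrite a GitHub file URL into its git-history view.
# Only rule used: swap the host github.com -> github.githistory.xyz.
from urllib.parse import urlparse, urlunparse


def to_githistory(url: str) -> str:
    """Return the github.githistory.xyz URL for a github.com file URL."""
    parts = urlparse(url)
    if parts.netloc != "github.com":
        raise ValueError("expected a github.com URL")
    return urlunparse(parts._replace(netloc="github.githistory.xyz"))


if __name__ == "__main__":
    # Hypothetical repository/file, used only to demonstrate the rewrite.
    print(to_githistory("https://github.com/user/repo/blob/master/README.md"))
    # -> https://github.githistory.xyz/user/repo/blob/master/README.md
```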
Today's recommended English article: "Why The World Needs Trustworthy Chatbots", by Allan Froy
Original article: https://towardsdatascience.com/why-the-world-needs-trustworthy-chatbots-aab5db94dbf8
Why we recommend it: Trust matters in human society, and it matters just as much for bots that need to exchange information with us.
Why The World Needs Trustworthy Chatbots
The notion of trust underpins so much of society, whether we realise it or not. In modern times, trust is driving the success of new decentralised business models. Trust expert Rachel Botsman describes how businesses like Airbnb and Uber are thriving in this new collaborative economy. They simply wouldn't exist without trust; it's what makes them work. All these companies do is facilitate so-called leaps of trust between individuals.

Human beings have a natural propensity to trust others; it's what drives us forward and is key to building relationships. Years of evolution have hard-wired our brains to assess the trustworthiness of others, and this is ultimately what makes it possible for nearly 8 billion people to co-exist on our planet.
But, apparently, the bots are coming, so it follows that humans are moving rapidly into a world where we interact with bots on a daily basis. If that’s the case, then how do we build a relationship with those bots and how do we know if we should trust them?
Why is trust important?
You might argue that we don't need to trust the bots that are performing simple tasks. The thing is, we already do. If I ask Alexa to set a timer for 5 minutes, I trust that I will be notified 5 minutes later. If I give my contact details to a bot greeting me on a website, I trust that those details will be passed on to the owners of the site.

Consider a more complex bot, maybe one with a high-stakes outcome. Bots, in theory, could provide personalised financial advice based on observations from your everyday life. A bot could be trained as an expert financial adviser and make sure you have the right investments, that your next house purchase works for you and that you never miss an upcoming bill. Today, financial advice is a very personal profession. As humans, we put a lot of faith, or trust, in the people who claim to be licensed as financial advisers. So, what would it take for humans to make significant financial decisions based on advice from a bot?
How do we define trust?
According to Professor James Davis from Utah State University, there are three required components for building trust:
- Ability – can you do what I'm expecting of you?
- Integrity – do we share any common values or beliefs?
- Benevolence – are you going to act in my best interests?
But what does any of this have to do with bots? Can you look up the professional qualifications of a bot, or get a sense of their personal interests and drivers? How would you test if a bot is going to act in your own interests?
Why chatbots fail miserably.
Most chatbots fail miserably at integrity and benevolence, whilst a large proportion seem to struggle with ability as well. The main problem with chatbot technology is the exponential speed at which it is advancing. The chatbot hype promised personalised in-channel experiences for everyone, but the current technology is really only suited to question-and-answer or FAQ-type interactions. There are technical limitations which many bot creators are not necessarily aware of. Compounding this problem is a plethora of freely available online tools which allow anyone to build their own chatbot.

It's really easy to build a chatbot. Unfortunately, it's also really easy to build a frustrating user experience.

There is a huge mismatch between user expectation and the technical capability behind many chatbots, which quickly leads to frustration for users.
Think back to when you last used a chatbot.
How many times did a bot fail to understand your request, appear to lose its train of thought, forget an answer you literally just gave it or fail to notice the fact that you answered a question with another question?
How many times did you feel like you were having a conversation?
I suspect that even if you've had a good chatbot experience, you can recall many more poor ones.
Unfortunately, there is nothing natural or conversational about interactions with most chatbots in the market today.
High-stakes chatbots need to communicate like humans.
To build a financial adviser bot, you can't get away with these technical flaws. It is imperative that, as a human, you can be confident the bot is performing the best it can with the data you know it has access to.

A conversation about money needs to be more natural; it needs to be contextual, it needs to be personal.

This is where the cutting-edge research in Conversational AI is currently focused. Some of this research is available to the public, with companies such as Rasa creating open-source tools for building contextual AI assistants. There is no doubt that the big technology companies are also focusing their efforts in this space and will bring these advances to the masses within the next couple of years. We're finally making progress in teaching computers how to communicate like humans.
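To give a rough sense of what "contextual" looks like in practice, here is a minimal sketch of a custom action using Rasa's Python SDK (rasa_sdk). The action name, slot names and reply text are hypothetical, and a real financial assistant would query an actual billing backend rather than returning a canned answer.

```python
# Minimal sketch of a contextual custom action with Rasa's Python SDK.
# Action name, slot names and reply text below are hypothetical.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher


class ActionTellNextBill(Action):
    def name(self) -> Text:
        return "action_tell_next_bill"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Context lives in the tracker: slots filled earlier in the
        # conversation shape the answer given now.
        account = tracker.get_slot("account_id")  # hypothetical slot
        if account is None:
            dispatcher.utter_message(text="Which account should I check?")
            return []
        # A real assistant would look the bill up in a billing backend here.
        dispatcher.utter_message(text="Your next bill is due on the 28th.")
        # Remember the topic so follow-up questions can stay in context.
        return [SlotSet("last_topic", "bills")]
```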
However, we need to tread carefully on this journey.
Consider that, to create perceived ability and integrity, programming a bot with a personalised back story would be akin to an actor pretending to be someone they’re not, a salesman deftly adapting his sales tactics on the fly, or even a conman looking legitimate enough to steal your money. Consider how Microsoft’s Tay experiment turned racist when the majority of its training data came from the same belief set.
AI technology currently doesn't have a sense of right or wrong, and humans deal with that by putting controls around the data the algorithms can work on. This means there will always be a human bias in any programmed bot, which is what allows us to build in traits that appeal to us as humans.
We need to take advantage of this level of control to build bots we can trust and accept before we create algorithms that can figure out the concept of trust for themselves.
The field of Conversational AI is rapidly advancing and is on the cusp of delivering some truly revolutionary experiences that users can trust. There are ethical hurdles to overcome; however, I'm confident we will develop trustworthy digital financial adviser bots that become part of our everyday lives. That day is coming. Trust me, I'm human.
Download the 开源日报 (Open Source Daily) app: https://openingsource.org/2579/
Join us: https://openingsource.org/about/join/
Follow us: https://openingsource.org/about/love/