Every day, OpenSource Daily (開源日報) recommends one quality GitHub open source project and one hand-picked English tech or programming article. QQ group: 202790710; Telegram group: https://t.me/OpeningSourceOrg
Today's recommended open source project: 《Phaser, an HTML5 Game Framework》
Why we recommend it: Phaser is an open source HTML5 game framework. It supports both JavaScript and TypeScript, ships with a rich feature set, and runs on IE9+ and other modern desktop browsers as well as mobile browsers such as Chrome (Android 2.2+) and Mobile Safari (iOS 5+).
Tutorials
There is an article at https://segmentfault.com/a/1190000009212221 that walks through building a small game with Phaser from scratch; the explanation is very detailed and well worth studying.
The Phaser Chinese site at http://www.phaserengine.com/ offers solid documentation, examples, and source code. We recommend starting with the documentation, then using the examples and source code to see how the framework is used, and finally setting yourself a small goal: follow the article above and build a little game of your own from scratch.
In the new year Phaser has already reached version 3.1.1. Everything mentioned above is based on the 2.x.x versions; to learn about 3.x.x, your best bet is the tutorials and examples on the official site, http://phaser.io/.
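Before working through those tutorials, here is a minimal sketch of what a Phaser 3 game looks like, assuming Phaser is installed from npm; the scene class, the 'logo' key, and the asset path are placeholders for your own project:

```typescript
// A minimal Phaser 3 game: one scene that preloads an image and draws it.
// Assumes Phaser is installed from npm (adjust the import to your tsconfig).
import Phaser from 'phaser';

class DemoScene extends Phaser.Scene {
    preload(): void {
        // Queue an image for loading under the key 'logo' (placeholder asset)
        this.load.image('logo', 'assets/logo.png');
    }

    create(): void {
        // Draw the loaded image at the centre of the 800x600 canvas
        this.add.image(400, 300, 'logo');
    }
}

new Phaser.Game({
    type: Phaser.AUTO,   // WebGL if available, Canvas otherwise
    width: 800,
    height: 600,
    scene: DemoScene,
});
```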
The State of Phaser Adoption
With the launch of WeChat mini games, interest in HTML5 game development has climbed further. The answer below surveys which engines the first batch of WeChat mini games were built with.
How do I get started with WeChat mini game development, and what learning resources are there? – answer by 朝花吸食 on Zhihu: https://www.zhihu.com/question/266024146/answer/302057348
Although Phaser has the smallest share among the engines used in those mini games, this still shows that Phaser can already be adapted to WeChat mini games, so developing with it is perfectly feasible.
As an open source, lightweight game framework, Phaser may not rival the domestic popularity of engines backed by Chinese companies, such as Cocos Creator, but it remains very friendly for quickly building small 2D games.
In the game framework ranking at http://html5gameengine.com/, Phaser sits in 4th place with very high ratings, and its development remains very active. We look forward to seeing it get even better.
Today's recommended English article: 《What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?》
What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?
This is the first of a multi-part series explaining the fundamentals of deep learning by long-time tech journalist Michael Copeland.
Artificial intelligence is the future. Artificial intelligence is science fiction. Artificial intelligence is already part of our everyday lives. All of those statements are true; it just depends on what flavor of AI you are referring to.
For example, when Google DeepMind's AlphaGo program defeated South Korean Master Lee Se-dol in the board game Go earlier this year, the terms AI, machine learning, and deep learning were used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-dol. But they are not the same things.
The easiest way to think of their relationship is to visualize them as concentric circles with AI — the idea that came first — the largest, then machine learning — which blossomed later, and finally deep learning — which is driving today's AI explosion — fitting inside both.
From Bust to Boom
AI has been part of our imaginations and simmering in research labs since a handful of computer scientists rallied around the term at the Dartmouth Conferences in 1956 and birthed the field of AI. In the decades since, AI has alternately been heralded as the key to our civilization's brightest future, and tossed on technology's trash heap as a harebrained notion of over-reaching propellerheads. Frankly, until 2012, it was a bit of both.
Over the past few years AI has exploded, and especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) – images, text, transactions, mapping data, you name it.
Let』s walk through how computer scientists have moved from something of a bust — until 2012 — to a boom that has unleashed applications used by hundreds of millions of people every day.
Artificial Intelligence — Human Intelligence Exhibited by Machines
Back in that summer of '56 conference the dream of those AI pioneers was to construct complex machines — enabled by emerging computers — that possessed the same characteristics of human intelligence. This is the concept we think of as "General AI" — fabulous machines that have all our senses (maybe even more), all our reason, and think just like we do. You've seen these machines endlessly in movies as friend — C-3PO — and foe — The Terminator. General AI machines have remained in the movies and science fiction novels for good reason; we can't pull it off, at least not yet.
What we can do falls into the concept of "Narrow AI." Technologies that are able to perform specific tasks as well as, or better than, we humans can. Examples of narrow AI are things such as image classification on a service like Pinterest and face recognition on Facebook.
Those are examples of Narrow AI in practice. These technologies exhibit some facets of human intelligence. But how? Where does that intelligence come from? That gets us to the next circle, Machine Learning.
Machine Learning — An Approach to Achieve Artificial Intelligence
Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is "trained" using large amounts of data and algorithms that give it the ability to learn how to perform the task.
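As a toy illustration of that difference (the numbers and the threshold rule below are invented, not from the article): a hand-coded routine bakes the decision rule in, while a "trained" one derives the rule from labeled examples.

```typescript
// Toy contrast between a hand-coded rule and a "trained" one.
// Hypothetical labeled data: a feature value and whether it belongs to the class.
const examples: Array<{ x: number; label: boolean }> = [
    { x: 12, label: false }, { x: 18, label: false },
    { x: 55, label: true },  { x: 63, label: true },
];

// Hand-coded approach: a human picks the threshold and writes it into the code.
const handCoded = (x: number): boolean => x > 50;

// Learned approach: the threshold is derived from the data
// (midpoint between the largest negative and the smallest positive example).
function train(data: typeof examples): (x: number) => boolean {
    const neg = Math.max(...data.filter(d => !d.label).map(d => d.x));
    const pos = Math.min(...data.filter(d => d.label).map(d => d.x));
    const threshold = (neg + pos) / 2;
    return (x: number) => x > threshold;
}

const learned = train(examples);
console.log(handCoded(40), learned(40)); // the learned rule reflects the data it saw
```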
Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches.
As it turned out, one of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters "S-T-O-P." From all those hand-coded classifiers they would develop algorithms to make sense of the image and "learn" to determine whether it was a stop sign.
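To make "hand-coded classifier" concrete, here is a sketch of the kind of fixed filter people wrote by hand: a 3x3 Sobel kernel correlated with a tiny, made-up grayscale patch. Nothing in it is learned; a human chose every number in the kernel.

```typescript
// A hand-coded edge detector: a fixed 3x3 Sobel kernel, no learning involved.
const sobelX = [
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
];

// Hypothetical 3x3 grayscale patch (0 = black, 255 = white): dark left, bright right.
const patch = [
    [0, 128, 255],
    [0, 128, 255],
    [0, 128, 255],
];

// Correlate the kernel with the patch; a large magnitude means a vertical edge.
let response = 0;
for (let r = 0; r < 3; r++) {
    for (let c = 0; c < 3; c++) {
        response += sobelX[r][c] * patch[r][c];
    }
}
console.log(response); // 1020: the filter "fires" on exactly the edge it was designed for
```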
Good, but not mind-bendingly great. Especially on a foggy day when the sign isn't perfectly visible, or a tree obscures part of it. There's a reason computer vision and image detection didn't come close to rivaling humans until very recently: it was too brittle and too prone to error.
Time, and the right learning algorithms, made all the difference.
Deep Learning — A Technique for Implementing Machine Learning
Another algorithmic approach from the early machine-learning crowd, Artificial Neural Networks, came and mostly went over the decades. Neural Networks are inspired by our understanding of the biology of our brains – all those interconnections between the neurons. But, unlike a biological brain where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.
You might, for example, take an image and chop it up into a bunch of tiles that are fed into the first layer of the neural network. In the first layer, individual neurons do their work and then pass the data on to a second layer. The second layer of neurons does its task, and so on, until the final layer is reached and the final output is produced.
Each neuron assigns a weighting to its input — how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and "examined" by the neurons — its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network's task is to conclude whether this is a stop sign or not. It comes up with a "probability vector," really a highly educated guess, based on the weighting. In our example the system might be 86% confident the image is a stop sign, 7% confident it's a speed limit sign, 5% confident it's a kite stuck in a tree, and so on — and the network architecture then tells the neural network whether it is right or not.
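As a rough sketch of that mechanism (the feature scores and per-class weights below are invented purely for illustration), each candidate class receives a weighted total of the detected attributes, and the totals are then normalized into a probability vector over the classes:

```typescript
// Toy "probability vector": weighted evidence per class, normalized with softmax.
// The feature scores (octagon, red, "STOP" letters) and the per-class weights
// are made-up numbers used only to show the mechanics.
const features = { octagon: 0.9, red: 0.8, stopLetters: 0.95 };

const classWeights: Record<string, Record<keyof typeof features, number>> = {
    stopSign:   { octagon: 3.0,  red: 2.0,  stopLetters: 4.0 },
    speedLimit: { octagon: -1.0, red: -0.5, stopLetters: 0.5 },
    kiteInTree: { octagon: -0.5, red: 1.0,  stopLetters: -2.0 },
};

// Weighted sum of the inputs for each class...
const scores = Object.entries(classWeights).map(([name, w]) => ({
    name,
    score: w.octagon * features.octagon
         + w.red * features.red
         + w.stopLetters * features.stopLetters,
}));

// ...then softmax turns the scores into probabilities that sum to 1.
const expSum = scores.reduce((sum, c) => sum + Math.exp(c.score), 0);
for (const c of scores) {
    console.log(c.name, (Math.exp(c.score) / expSum).toFixed(2));
}
```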
Even this example is getting ahead of itself, because until recently neural networks were all but shunned by the AI research community. They had been around since the earliest days of AI, and had produced very little in the way of "intelligence." The problem was that even the most basic neural networks were very computationally intensive; it just wasn't a practical approach. Still, a small heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn't until GPUs were deployed in the effort that the promise was realized.
If we go back again to our stop sign example, chances are very good that as the network is getting tuned or "trained" it's coming up with wrong answers — a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time — fog or no fog, sun or rain. It's at that point that the neural network has taught itself what a stop sign looks like; or your mother's face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.
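As a toy sketch of what that tuning amounts to, here is a single artificial neuron with one weight and one bias, trained by gradient descent on made-up one-dimensional data; real networks do the same thing across millions of weights and millions of images:

```typescript
// Toy training loop: one neuron with a single weight and bias learns to
// separate made-up 1-D examples by repeatedly nudging its parameters.
const data = [
    { x: 0.1, y: 0 }, { x: 0.3, y: 0 },   // "not a stop sign"
    { x: 0.7, y: 1 }, { x: 0.9, y: 1 },   // "stop sign"
];

const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

let weight = 0;
let bias = 0;
const learningRate = 0.5;

// Each pass over the data nudges weight and bias to reduce the error.
for (let epoch = 0; epoch < 1000; epoch++) {
    for (const { x, y } of data) {
        const prediction = sigmoid(weight * x + bias);
        const error = prediction - y;        // gradient of the cross-entropy loss
        weight -= learningRate * error * x;  // move the weight against the error
        bias -= learningRate * error;
    }
}

console.log(sigmoid(weight * 0.8 + bias).toFixed(2)); // close to 1: "stop sign"
console.log(sigmoid(weight * 0.2 + bias).toFixed(2)); // close to 0: "not a stop sign"
```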
Ng's breakthrough was to take these neural networks, and essentially make them huge, increase the layers and the neurons, and then run massive amounts of data through the system to train it. In Ng's case it was images from 10 million YouTube videos. Ng put the "deep" in deep learning, which describes all the layers in these neural networks.
Today, image recognition by machines trained via deep learning in some scenarios is better than humans, and that ranges from cats to identifying indicators for cancer in blood and tumors in MRI scans. Google's AlphaGo learned the game, and trained for its Go match — it tuned its neural network — by playing against itself over and over and over.
Thanks to Deep Learning, AI Has a Bright Future
Deep Learning has enabled many practical applications of Machine Learning and by extension the overall field of AI. Deep Learning breaks down tasks in ways that make all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations, are all here today or on the horizon. AI is the present and the future. With Deep Learning's help, AI may even get to that science fiction state we've so long imagined. You have a C-3PO, I'll take it. You can keep your Terminator.