Every day we recommend one quality open-source project on GitHub plus one hand-picked English tech or programming article; welcome to follow Open Source Daily (開源日報).
Issue #0 has been out for a while; today we are finally launching issue #1 in earnest. Stay tuned.
Today's recommended open-source project: Gifski, a GIF-making tool; GitHub address: https://github.com/ImageOptim/gifski
Why we recommend it: Gifski is an open-source program that turns a series of images, or a video, into a high-quality GIF. The conversion itself is not hard; what is rare is preserving that much quality. The project also made GitHub Trending's list of fastest-growing repositories.
Usage
Gifski actually runs on all three platforms: Windows, macOS, and Linux. The one difference is that the macOS build ships with a built-in frame extractor, so you can drag a video straight into the program window and get a GIF; on the other platforms you must split the video into frames with a third-party tool first, and run gifski from the command line.
(Official example GIF omitted here; the file was too large to embed.)
That is not a big problem, though. Remember our earlier Open Source Workshop deep dive on ffmpeg (https://openingsource.org/553/)? That command-line program is a perfect frame splitter. Below, I will use a single video to demonstrate how the two tools work together.
Environment: Windows 10.
Source video: 祖婭納惜 - 逆浪春秋, a 30 FPS video, which shows off the tool's performance better.
If you are new to command-line tools, follow these steps:
1. Put ffmpeg, gifski, and the video you want to convert in the same folder.
2. Press Win+R, type cmd, and hit Enter to open the command prompt.
3. Type X: where X is the drive letter of that folder.
4. Type cd xxxx/xxxx, the folder's path on that drive, without the drive letter.
Now you can call both tools.
In the cmd window, call ffmpeg to split the video into frames:
ffmpeg -i video.mp4 frame%04d.png
Here %04d zero-pads the frame counter to four digits, counting up from 0001. For longer videos you can widen the field, but long clips are not recommended anyway: more frames dramatically slow down GIF generation, and in practice 1,000 frames already take ten-odd minutes to encode.
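The %04d pattern is ordinary printf-style formatting, so you can preview the filenames ffmpeg will produce without running ffmpeg at all:

```shell
# Print a few of the filenames the frame%04d.png pattern yields: the counter
# is zero-padded to four digits, so alphabetical sort stays correct up to 9999.
for i in 1 2 10 999 1000; do
  printf 'frame%04d.png\n' "$i"
done
# prints frame0001.png, frame0002.png, frame0010.png, frame0999.png, frame1000.png
```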
When it finishes, delete any frames you do not need.
Next, merge these images into a single GIF.
In the cmd window, call gifski:
gifski -o file.gif frame*.png
A friendly warning: on my machine, encoding a 1280×720 GIF runs at roughly 0.7 seconds per frame, so budget your time when making a GIF. This run also uses the default frame rate of 20 fps; since that is lower than the source video's 30 fps, playback looks slightly slow.
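At the rough 0.7 s/frame figure measured above (an informal number from this machine, not a gifski guarantee), total encoding time scales linearly with frame count. A quick back-of-the-envelope estimate in plain shell arithmetic:

```shell
# Estimate gifski encoding time for N frames at ~0.7 s/frame.
# 0.7 s is expressed as 7 tenths so we can stay in integer arithmetic.
frames=1000
tenths_per_frame=7
total_tenths=$((frames * tenths_per_frame))
printf '%d frames -> about %d min %d s\n' \
  "$frames" $((total_tenths / 600)) $(((total_tenths % 600) / 10))
# prints: 1000 frames -> about 11 min 40 s
```

This matches the "ten-odd minutes for 1,000 frames" observation earlier in the post.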
Gifski also accepts command-line options, such as --fps to set the output frame rate.
Let us try again at a 30 fps frame rate, naming the result h_sped.gif:
gifski -o h_sped.gif --fps 30 frame*.png
The result:
(The GIF is too large to embed, so a text description will have to do.)
You can see noticeable frame drops toward the end. That could be a bug in the program, but more likely my machine's performance is the limit. In real use, avoid frames this large and clips this long; a GIF that big is a chore for viewers anyway.
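The whole walkthrough condenses into just two commands. A dry-run sketch that prints them rather than executing them (video.mp4 and output.gif are placeholder names; run the printed commands yourself with both tools on your PATH):

```shell
# Dry run of the ffmpeg + gifski pipeline from this post: print the two
# commands instead of executing them, so neither tool is needed here.
video=video.mp4
fps=30
printf 'ffmpeg -i %s frame%%04d.png\n' "$video"
printf 'gifski -o output.gif --fps %d frame*.png\n' "$fps"
# prints:
#   ffmpeg -i video.mp4 frame%04d.png
#   gifski -o output.gif --fps 30 frame*.png
```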
Overall, gifski is a powerful tool for high-quality GIFs, but it does only that one thing, depends on ffmpeg to split the video into PNGs first, and offers no way to choose the output size, so the workflow is a bit clumsy. On Windows, then, I would not recommend gifski; try the software recommended below instead.
On Linux, ImageMagick (which ships with Ubuntu) achieves almost identical results (fast, high quality) with a very similar workflow, so the two are about even, and both lean heavily on ffmpeg. On the Mac I cannot speak for other tools, but gifski works there just the same.
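For reference, ImageMagick's rough equivalent is a single `convert` call. Its `-delay` option is in hundredths of a second, so 100/fps approximates the frame interval (a dry-run sketch that only prints the command; the 30 fps value mirrors the gifski example above):

```shell
# Print the approximate ImageMagick equivalent of the gifski call above.
# convert's -delay unit is 1/100 s, so delay = 100/fps (integer-rounded;
# 3 gives ~33 fps playback, close to the 30 fps target).
fps=30
delay=$((100 / fps))
printf 'convert -delay %d -loop 0 frame*.png output.gif\n' "$delay"
# prints: convert -delay 3 -loop 0 frame*.png output.gif
```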
While comparing tools, I actually stumbled on a wonderful all-in-one: ScreenToGif.
It bundles ffmpeg and gifski together and adds screen recording. The bundled versions are trimmed down and do not support every video format, but they are still very capable, and you can even hand-pick which frames to convert. Best of all, it is a windowed program, not a command-line one. If you do not know, or cannot stand, the command line, you are in luck; just treat the command-line walkthrough above as a learning exercise. (Do not blame me!)
Today's recommended English article: Machine learning meets culture, from Google.
Why we recommend it: machine learning is the fashionable topic of the moment, but what human stories and reflections lie behind the bustle? When machine learning meets culture at this crossroads, what curious reactions follow? Human or machine, everyone runs into this question sooner or later, and a group of scientists is already applying machine learning to works of art.
Machine learning meets culture
Whether helping physicians identify disease or finding photos of "hugs," AI is behind a lot of the work we do at Google. And at our Arts & Culture Lab in Paris, we've been experimenting with how AI can be used for the benefit of culture. Today, we're sharing our latest experiments: prototypes that build on seven years of work in partnership with 1,500 cultural institutions around the world. Each of these experimental applications runs AI algorithms in the background to let you unearth cultural connections hidden in archives, and even find artworks that match your home decor.
Art Palette
From interior design to fashion, color plays a fundamental role in expression, communicating personality, mood and emotion. Art Palette lets you choose a color palette, and using a combination of computer vision algorithms, it matches artworks from cultural institutions around the world with your selected hues. See how Van Gogh's Irises share a connection of color with a 16th century Iranian folio and Monet's water lilies. You can also snap a photo of your outfit today or your home decor, and click through to learn about the history behind the artworks that match your colors.
Watch how legendary fashion designer Sir Paul Smith uses Art Palette:
Giving historic photos a new lease on LIFE
Beginning in 1936, LIFE Magazine captured some of the most iconic moments of the 20th century. In its 70-year run, millions of photos were shot for the magazine, but only 5 percent of them were published at the time. Four million of those photos are now available for anyone to look through. But with an archive that stretches 6,000 feet (about 1,800 meters) across three warehouses, where would you start exploring? The experiment LIFE Tags uses Google's computer vision algorithm to scan, analyze and tag all the photos from the magazine's archives, from the A-line dress to the zeppelin. Using thousands of automatically created labels, the tool turns this unparalleled record of recent history and culture into an interactive web of visuals everyone can explore. So whether you're looking for astronauts, an Afghan Hound or babies making funny faces, you can navigate the LIFE Magazine picture archive and find them with the press of a button.
Identifying MoMA artworks through machine learning
Starting with its first exhibition in 1929, The Museum of Modern Art in New York took photos of its exhibitions. While the photos documented important chapters of modern art, they lacked information about the works in them. To identify the art in the photos, one would have had to comb through 30,000 photos, a task that would take months even for the trained eye. The tool built in collaboration with MoMA did the work of automatically identifying artworks, 27,000 of them, and helped turn this repository of photos into an interactive archive of MoMA's exhibitions.
We unveiled our first set of experiments that used AI to aid cultural discoveries in 2016. Since then we've collaborated with institutions and artists, including stage designer Es Devlin, who created an installation for the Serpentine Galleries in London that uses machine learning to generate poetry. We hope these experimental applications will not only lead you to explore something new, but also shape our conversations around the future of technology and its potential as an aid for discovery and creativity.
You can try all our experiments at g.co/artsexperiments or through the free Google Arts & Culture app for iOS and Android.
Every day we recommend one quality open-source project on GitHub plus one hand-picked English tech or programming article; welcome to follow Open Source Daily (開源日報). Discussion QQ group: 202790710