Open Source Daily recommends one high-quality GitHub open source project and one hand-picked English tech or programming article every day. Keep reading Open Source Daily and keep up the good habit of daily learning.
Today's recommended open source project: css_tricks (tricks you can actually use)
Today's recommended English article: Superintelligence Versus You

Today's recommended open source project: css_tricks (tricks you can actually use). Portal: GitHub link
Why we recommend it: a collection of tricks that may come in handy when writing CSS, ranging from near-universal ones such as vertical centering to good-looking patterns such as accordion styles. If you write web pages often, it is worth a visit. And if you are really curious about the cat in the bottom-right corner of the page, the bottom of the project has a section about the cat as well.
Today's recommended English article: Superintelligence Versus You, by Erik Hoel
Article link: https://medium.com/s/story/superintelligence-vs-you-1e4a77177936
Why we recommend it: why a dark future in which AI dominates humanity can only ever be a fantasy.

Superintelligence Versus You

Supposedly atheist intellectuals are now spending a lot of time arguing over the consequences of creating "God." Often they refer to this supreme being as a "superintelligence," an A.I. that, in their thought experiments, possesses magical traits far beyond just enhanced intelligence. Any belief system needs a positive and negative aspect, and for this new religion-replacement, the "hell" scenario is that this superintelligence we cannot control might decide to conquer and destroy the world.

Like their antecedents—Hegel, Marx, J.S. Mill, Fukuyama, and many others—these religion-replacement proposers view history as a progression toward some endpoint (often called a "singularity"). This particular eschaton involves the creation of a superintelligence that either uplifts us or condemns us. The religious impulse of humans—the need to attribute purpose to the universe and history—is irrepressible even among devoted atheists. And, unfortunately, this worldview has been taken seriously by normally serious thinkers.

I and others have argued that rather than new technologies leading to some sort of end-of-history superintelligence, it's much more likely that a "tangled bank" of all sorts of different machine intelligences will emerge: some small primitive A.I.s that mainly filter spam from email, some that drive, some that land planes, some that do taxes, etc. Some of these will be much more like individual cognitive modules, others more complex, but they will exist, like separate species, adapted to a particular niche. As with biological life, they will bloom across the planet in endless forms, most beautiful. This view is a lot closer to what's actually happening in machine learning on a day-to-day basis.

The logic behind this tangled bank is based on the fundamental limits of how you can build an intelligence as an integrated whole. Just like evolution, no intelligence can be good at solving all classes of problems. Adaptation and specialization are necessary. It's this fact that ensures evolution is an endless game and makes it fundamentally nonprogressive. Organisms adapt to an environment, but that environment changes, maybe even due to that organism's adaptation, and so on, for however long there is life. Put another way: Being good at some things makes it harder to do others, and no entity is good at everything.
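The claim that no entity is good at everything has a formal cousin in the no-free-lunch theorems. Here is a minimal sketch of the flavor of that result (my illustration; the essay only argues the point informally): averaged over every possible "world," any fixed prediction rule scores exactly the same.

```python
# No-free-lunch-style toy: over ALL possible labelings of a tiny domain,
# every deterministic prediction rule has the same average accuracy (0.5).
# Being tuned for some worlds necessarily costs you on the complementary ones.
from itertools import product

rules = list(product([0, 1], repeat=4))   # all 16 predictors on 4 inputs
worlds = list(product([0, 1], repeat=4))  # all 16 possible true labelings

for rule in rules[:3]:  # pick any rules; the answer never changes
    avg = sum(sum(r == w for r, w in zip(rule, world)) / 4
              for world in worlds) / len(worlds)
    print(rule, "-> average accuracy over all worlds:", avg)  # always 0.5
```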

In a nonprogressive view, intelligence is, from a global perspective, very similar to fitness. Becoming more intelligent at X often makes you worse at Y, and so on. This ensures that intelligence, just like life, has no fundamental endpoint. Human minds struggle with this view because without an endpoint there doesn't seem to be much of a point either.

Despite the probable incoherence of a true superintelligence (all-knowing, all-seeing, etc.), some argue that, because we don't fully know the formal constraints on building intelligences, it may be possible to build something that's superintelligent in comparison to us and that operates over a similar class of problems. This more nuanced view argues that it might be possible to build something more intelligent than a human over precisely the kinds of domains humans are good at. This is kind of like an organism outcompeting another organism for the same niche.

Certainly this isn't in the immediate future. But let's assume, in order to show that concerns about the creation of superintelligence as a world-ending eschaton are overblown, that it is indeed possible to build something 1,000x smarter than a human across every problem-solving domain we engage in.

Even if that superintelligence were created tomorrow, I wouldn't be worried. Such worries are based on a kind of Doctor Who-esque being. A being that, in any circumstance, can find some advantage via pure intelligence that enables victory to be snatched from the jaws of defeat. A being that, even if put in a box buried underground, would, just like Doctor Who, always be able to use its intelligence to both get out of the box and go on to conquer the entire world. Let's put aside the God-like magical powers often granted superintelligences—like the ability to instantaneously simulate others' consciousnesses just by talking to them or the ability to cure cancer without doing any experiments (you cannot solve X just by being smart if you don't have sufficient data about X; ontology simply doesn't work that way)—and just assume it's merely a superintelligent agent lacking magic.

The important thing to keep in mind is that Doctor Who is able to continuously use intelligence to solve situations because the show writers create it that way. The real world doesn't constantly have easy shortcuts available; in the real world of chaotic dynamics and P ≠ NP and limited data, there aren't orders-of-magnitude more efficient solutions to every problem in the human domain of problems. And it's not that we fail to identify these solutions because we lack the intelligence. It's because they don't exist.

An example of this is how often a superintelligence could be beaten by a normal human at all sorts of tasks, given either a role for luck or small asymmetries between the human and the A.I. For example, imagine you are playing chess against a superintelligence of the 1,000x-smarter-than-humans-across-all-human-problem-solving-domains variety. If you're one of the best chess players in the world, you could at most hope for a tie, although you may never get one. Now let's take pieces away from the superintelligence, giving it just pawns and its king. Even if you are, like me, not well-practiced at chess, you could easily defeat it. This is simply a no-win scenario for the superintelligence, as you crush it on the board, mercilessly trading piece for piece, backing it into a corner, finally toppling its king.
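To make the handicap concrete, here is a minimal sketch of the position described above, using the third-party python-chess library (my choice of tool; any board representation would do). No amount of search conjures back the missing material.

```python
# The handicap scenario above: the "superintelligence" plays Black with only
# its king and pawns, while the human keeps a full army.
# Assumes the third-party python-chess library (pip install chess).
import chess

# Standard White setup versus a king-and-pawns-only Black.
HANDICAP_FEN = "4k3/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQ - 0 1"

PIECE_VALUES = {
    chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0,
}

def material(board: chess.Board, color: chess.Color) -> int:
    """Sum of conventional piece values for one side."""
    return sum(PIECE_VALUES[piece.piece_type]
               for piece in board.piece_map().values()
               if piece.color == color)

board = chess.Board(HANDICAP_FEN)
print("Human (White) material:", material(board, chess.WHITE))  # 39
print("A.I. (Black) material:", material(board, chess.BLACK))   # 8
# However deeply Black calculates, the missing 31 points of material do not
# come back: the position is lost regardless of intelligence.
```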

That there are natural upper bounds on performance from being intelligent isn't some unique property of chess and its variants. In fact, as strategy games get more complex, intelligence often matters less. Because the game gets chaotic, predictions are inherently less precise due to amplifying noise, available data for those predictions becomes more limited, and brute numbers, positions, resources, etc., begin to matter more.
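The "amplifying noise" point can be made concrete with any chaotic system. A minimal sketch (my illustration, not the article's): even a forecaster who knows the dynamics exactly loses all predictive power once a tiny error in the initial measurement has compounded.

```python
# Two trajectories of the logistic map x -> r*x*(1-x), a standard toy chaotic
# system, started a hair's breadth apart. The forecaster knows the rule
# exactly; only the initial measurement is off by 1e-10.
r = 4.0
x_true, x_model = 0.3, 0.3 + 1e-10

for step in range(1, 61):
    x_true = r * x_true * (1 - x_true)
    x_model = r * x_model * (1 - x_model)
    if step % 10 == 0:
        print(f"step {step:2d}: prediction error = {abs(x_true - x_model):.3e}")
# Around step 35-40 the error saturates at order 1: the "prediction" is no
# better than a random draw from [0, 1], regardless of how much intelligence
# produced it.
```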

Let's bump the complexity of the game you're playing against the superintelligence up to the computer strategy game StarCraft. Again, assuming both players start perfectly equal, let's grant the superintelligence an easy win. But, in this case, it would take only a minor change in the initial conditions to make winning impossible for the superintelligence. Tweaking, say, starting resources would put the superintelligence into all sorts of no-win scenarios against even a mediocre player. Even just delaying the superintelligence from starting the game by 30 seconds would probably be enough for great human players to consistently win. You can give the superintelligence whatever properties you want—maybe it thinks 1,000x faster than a human. But its game doesn't run 1,000x faster, and by starting 30 seconds earlier, the human smokes it.
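A back-of-the-envelope simulation of that head start (toy numbers of my own, loosely StarCraft-flavored): because income compounds in real time and thinking speed does not change the game clock, the 30-second deficit never closes, even under perfect play.

```python
def economy(start_time: float, horizon: float, dt: float = 1.0) -> int:
    """Workers owned at `horizon` when every spare mineral is reinvested.
    Toy numbers: 6 starting workers, 0.7 minerals/sec each, 50 per worker."""
    workers, bank, t = 6, 0.0, start_time
    while t < horizon:
        bank += workers * 0.7 * dt  # mining income this tick
        while bank >= 50:           # "perfect play": reinvest instantly
            bank -= 50
            workers += 1
        t += dt
    return workers

human = economy(start_time=0.0, horizon=300.0)
ai = economy(start_time=30.0, horizon=300.0)  # thinks 1000x faster, starts late
print(f"workers after 5 minutes: human={human}, superintelligence={ai}")
# Both sides already play flawlessly, so extra intelligence adds nothing here;
# the side that started first simply owns a permanently larger, compounding
# economy.
```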

The point is that our judgments on how effective intelligence alone is for succeeding at a given task are based on situations when all other variables are fixed. Once you start manipulating those variables, instead of controlling for them, you see that intelligence is only one of many things that affect the outcome of even the most strategic games—and often not a very important one.

We can think of a kind of ultimate strategy game called Conquer the World. You're born into this world with whatever resources you start with, and you, a lone agent, must conquer the entire earth and all its nations, without dying. I hate to break it to you: there's no way to consistently win this game. Not because it's merely a hard game, but because a winning strategy, no matter your intelligence, simply doesn't exist. The real world doesn't have polarity reversals, and there are many tasks with no shortcuts.

The great whirlwind of limbs, births, deaths, careers, lovers, companies, children, consumption, nations, armies—that is, the great globe-spanning multitudinous mass that is humanity—has so many resources and numbers, and so much momentum, that it is absurd to think any lone entity could, by itself, ever win a war against us, no matter how intelligent that entity was. It's like a StarCraft game where the superintelligence starts with one drone and we start with literally the entire map covered by our bases. It doesn't matter how that drone behaves; it's just a no-win scenario. Barring magical abilities, a single superintelligence, with everything beyond its senses hidden in the fog of war, with limited data, dealing with the exigencies and chaos and limitations that define the physical world, is in a no-win scenario against humanity. And a superintelligence, if it's at all intelligent, would know this.

Of course, no thought experiment or argument is going to talk someone out of a progressive account of history, particularly if that account operates to provide morality, structure, and meaning to what would otherwise be a cold and empty universe. Eventually the workers must rise up, or equality for all must be achieved, or the chosen nation-state must bestride the world, or we must all be uplifted into a digital heaven or thrown into oblivion. To think otherwise is almost impossible.

Human minds need a superframe that contains all others, that endows them with meaning, and it's incredibly difficult to operate without one. This "singularity" is as good as any other, I suppose.

Humans just don't do well with nonprogressive processes. The reason it took so long to come up with the theory of evolution by natural selection, despite its relatively simple logic and armchair-derivability, is its nonprogressive nature. These are things without linear frames, without beginnings or ends or reasons why. When I was studying evolutionary theory back in college, I remember at one moment feeling a dark logic click into place: Life was inevitable, design inevitable, yet it needed no watchmaker and had no point, and this pointlessness was the reason why I was, why everyone was. But such a thought is slippery, impossible to hold onto for a human, to really keep believing in the trenches of the everyday. And so, when serious thinkers fall for silly thoughts about history coming to an end, we shouldn't judge. Each of us, after all, engages in such silliness every morning when we get out of bed.

Download the Open Source Daily app: https://openingsource.org/2579/
Join us: https://openingsource.org/about/join/
Follow us: https://openingsource.org/about/love/