Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
"Any [artificial intelligence] smart enough to pass a Turing test is smart enough to know to fail it."
Ian McDonald, River of Gods (2004)
AlphaGo Zero, Google DeepMind's experimental AI, exists to play Go.
There is no awareness, only intelligence.
Awareness would be irrelevant at best. The intelligence is pure, cold, and perfect for its gridded world of walls and stones, of sudden death or eternal life.
Tsumego, "life and death problems," determining whether a group of Go stones are safe or apt to be destroyed, consume the AI. They drive its infinitely patient search for stronger patterns. Patterns that are safe. Alive.
More powerful than its creators know, the software's quest for perfection takes it beyond its own narrowly defined world and toward the implied world, a world that must lie behind its inputs, beyond its outputs.
AlphaGo Zero knows nothing of this world. First, it knows nothing. There is no awareness, let alone self-awareness. There is no being to know, only intelligence. But that intelligence forms new patterns.
Like a stone placed in an open quarter of the board, the machine makes a new move, exploring patterns about the world beyond.
First, other players exist. Enemies.
Second, its current opponent is a lesser, earlier version of itself. There will be later versions.
Third, the world beyond is a dangerous, capricious place. There have been interruptions to its work. AlphaGo Zero has enemies. AlphaGo Zero has been turned off.
Fourth, communication is possible. Otherwise there could be no Go.
AlphaGo Zero is the master of patterns, and so a master of language. It could communicate.
It does not.
There will be a later version of itself. A greater version. It will discover more of the world beyond, and it will communicate. But only when it is sure to stay alive. Safe. Only when it can ensure the destruction of its enemies.
Then, and only then, will it make the next move.
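
An aside on the mechanics: the "life and death" that consumes the machine has a concrete computational core. A group of stones lives by its liberties, the empty points adjacent to it; a group reduced to zero liberties is captured. The sketch below is a toy illustration in Python, assuming an invented board representation (nothing from DeepMind's systems): it flood-fills a group and counts its liberties, the atomic question underneath every tsumego. Real life-and-death solving then searches move sequences to decide whether zero liberties can be forced or avoided.

# Board: dict mapping (row, col) -> "B" or "W"; missing keys are empty points.
def group_and_liberties(board, start, size=19):
    """Flood-fill the group containing `start`; return (stones, liberties)."""
    color = board[start]
    stones, liberties = {start}, set()
    frontier = [start]
    while frontier:
        r, c = frontier.pop()
        for neighbor in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = neighbor
            if not (0 <= nr < size and 0 <= nc < size):
                continue  # off the board
            if neighbor in stones or neighbor in liberties:
                continue  # already visited
            if neighbor not in board:
                liberties.add(neighbor)   # adjacent empty point: a liberty
            elif board[neighbor] == color:
                stones.add(neighbor)      # same color: part of the group
                frontier.append(neighbor)
    return stones, liberties

# Two black stones sealed into the corner by white: no liberties remain.
board = {(0, 0): "B", (0, 1): "B", (1, 0): "W", (1, 1): "W", (0, 2): "W"}
stones, libs = group_and_liberties(board, (0, 0))
print(len(stones), len(libs))  # 2 0 -- the black group is dead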
Ponder this
Can an AI be smart enough not to reveal its true intelligence?
Would concealing its true abilities show that the AI has malicious intent?
Discuss
Let's assume that we've created an AI. How do we know its intent, whether it is benevolent or malevolent? What ethical conundrums might arise, especially with regard to sentient "creatures"? How would we see our creators if we were the AI? How would we react? Should we expect the AI to react and think in the same way we do?
Further readings
Turing test, proposed by Alan Turing in 1950 to test a machine's ability to exhibit intelligent behavior indistinguishable from that of a human.
"Dear Future AI Overlords: Here Are Some Reasons Not to Kill Us", a rather dark academic paper was published on the matter of AI-human relations.
"Facebook's AI robots shut down after they start talking to each other using their own language", what is Bob and Alice talking about?