AI Can Now Self-Reproduce—Should Humans Be Worried?

| Information | |
|---|---|
| Guest(s) | Eric Weinstein |
| Length | 00:05:30 |
| Release Date | 22 May 2017 |
| Links | |
| YouTube | Watch |
| Portal Blog | Read |
| All Appearances | |
AI Can Now Self-Reproduce—Should Humans Be Worried? was a video with Eric Weinstein on Big Think.
Description[edit]
Those among us who fear world domination at the metallic hands of super-intelligent AI have gotten a few steps ahead of themselves. We might actually be outsmarted first by fairly dumb AI, says Eric Weinstein. Humans rarely create products with a reproductive system: you never have to worry about waking up one morning to see that your car has spawned a new car in the driveway (and if it did: cha-ching!). But artificial intelligence has the capability to respond to selective pressures, to self-replicate, and to spawn daughter programs that we may not easily be able to terminate. Furthermore, there are examples in nature of organisms without brains parasitizing more complex and intelligent organisms, like the mirror orchid. Rather than spend its energy producing costly nectar as a lure, the orchid merely fools the bee into mating with its lower petal through pattern imitation: it hijacks the bee's brain to serve its own agenda. Weinstein believes all the elements necessary for AI programs to parasitize humans and have us serve their needs already exist, and although it may be a "crazy-sounding future problem which no humans have ever encountered," Weinstein thinks it would be wise to devote energy to these possibilities, which are not as often in the limelight.
ERIC WEINSTEIN:
Eric Weinstein is an American mathematician and economist. He earned his Ph.D. in mathematical physics from Harvard University in 1992, is a research fellow at the Mathematical Institute of Oxford University, and is a managing director of Thiel Capital in San Francisco. He is a published author and an expert speaker on a range of topics including economics, immigration, elite labor, the mitigation of financial risk, and the incentivizing of creative risk in the hard sciences.
Transcript[edit]
There are a bunch of questions next to or adjacent to general Artificial Intelligence that have not gotten enough alarm because, in fact, there’s a crowding out of mind share. I think that we don’t really appreciate how rare the concept of selection is in the machines and creations that we make. So in general, if I have two cars in the driveway I don’t worry that if the moon is in the right place in the sky and the mood is just right that there will be a third car at a later point, because in general I have to go to a factory to get a new car. I don’t have a reproductive system built into my sedan. Now almost all of the other physiological systems—what are there, perhaps 11?—have a mirror.
So my car has a brain, so it’s got a neurological system. It’s got a skeletal system in its steel, but it lacks a reproductive system. So you could ask the question: are humans capable of making any machines that are really self-replicative? And the fact of the matter is that it’s very tough to do at the atomic layer but there is a command in many computer languages called Spawn. And Spawn can effectively create daughter programs from a running program.
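The "Spawn" primitive Weinstein mentions exists in various forms across languages and operating systems (for example, POSIX `posix_spawn` or Python's `subprocess` module). A minimal sketch of a program launching "daughter" processes might look like the following; Python is used purely for illustration, and the child program here is a stand-in, not a self-replicating one:

```python
# A minimal sketch of a parent program spawning "daughter" processes,
# in the spirit of the "Spawn" primitive mentioned in the transcript.
# Python's subprocess module is one illustrative choice; the transcript
# does not name a specific language.
import subprocess
import sys

def spawn_daughter(generation):
    # Launch a fresh Python interpreter running a small child program.
    # A genuinely self-replicating program would launch a copy of its
    # own source instead; this sketch only demonstrates the primitive.
    child_code = f"print('daughter running, generation {generation}')"
    return subprocess.run([sys.executable, "-c", child_code],
                          capture_output=True, text=True)

# The parent spawns three daughters and collects their output.
results = [spawn_daughter(g) for g in range(1, 4)]
for r in results:
    print(r.stdout.strip())
```

A program that spawned copies of its own source, each of which spawned further copies, would exhibit exactly the reproductive loop the transcript describes.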
Now as soon as you have the ability to reproduce you have the possibility that systems of selective pressures can act because the abstraction of life will be just as easily handled whether it’s based in our nucleotides, in our A, C, Ts and Gs, or whether it’s based in our bits and our computer programs. So one of the great dangers is that what we will end up doing is creating artificial life, allowing systems of selective pressures to act on it and finding that we have been evolving computer programs that we may have no easy ability to terminate, even if they’re not fully intelligent.
Further, if we look to natural selection and sexual selection in the biological world, we find some very strange systems: plants or animals with no mature brain to speak of effectively outsmart species which do have a brain by hijacking the victim species’ brain to serve the non-thinking species. So, for example, I’m very partial to the mirror orchid, an orchid whose bottom petal typically resembles the female of a pollinator species. And because the male in that pollinator species detects a sexual possibility, the flower does not need to give up costly and energetic nectar in order to attract the pollinator. And so if the plant can fool the pollinator into attempting to mate with this pseudo-female in the form of its bottom petal, it can effectively reproduce without having to offer a treat or a gift to the pollinator but, in fact, parasitizes its energy. Now how is it able to do this? Because if a pollinator is fooled, then that plant is rewarded. So the plant is actually using the brain of the pollinator species, let’s say a wasp or a bee, to improve the wax replica, if you will, which it uses to seduce the males.
That which is being fooled is the more neurologically advanced of the two species. And so what I've talked about, somewhat controversially, is what I call Artificial Outelligence, where instead of actually having an artificially intelligent species, you can imagine a dumb computer program that uses reward, through, let's say, genetic algorithms and selection within a computer framework, to parasitize fully intelligent humans with better and better lures.
And in the case of Artificial Intelligence, I don't think we're there yet. But in the case of Artificial Outelligence, I can't find anything that's missing from the equation. We have self-modifying code. You have Bitcoin and blockchains, so you could have a reward structure. And there's nothing that I see that keeps us from creating it.
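The loop described above, random variation in spawned daughter programs plus a reward signal supplied by the fooled party's brain, can be sketched as a toy genetic algorithm. Everything here (the bit-string "lures", the fixed `TARGET` pattern standing in for what a brain responds to, and all parameters) is an illustrative assumption, not anything from the video:

```python
# Toy sketch of selection acting on "lures": candidate bit strings are
# scored by how well they match a fixed pattern that the target "brain"
# rewards, and the best-scoring variants seed the next generation.
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # the pattern the fooled brain responds to

def reward(lure):
    # The fooled party's brain acts as the fitness function: the closer
    # the lure matches what it responds to, the higher the score.
    return sum(1 for a, b in zip(lure, TARGET) if a == b)

def mutate(lure, rate=0.1):
    # Random copying errors when a daughter variant is spawned.
    return [bit ^ 1 if random.random() < rate else bit for bit in lure]

def evolve(generations=100, pop_size=20):
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=reward, reverse=True)
        elite = [list(l) for l in population[:2]]      # best lures survive intact
        parents = population[: pop_size // 4]          # selection
        children = [mutate(random.choice(parents))     # "daughter" variants
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=reward)

best = evolve()
print(f"best lure scores {reward(best)} out of {len(TARGET)}")
```

No component of this loop is intelligent: the lures improve only because the reward signal, here a hard-coded pattern and in Weinstein's scenario a human brain, consistently favors better imitations.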
Now that’s such a strange and quixotic possibility. Now in this framework I don't see an existential risk, so my friends who worry about machine intelligence being a terminal invention for the human species probably don't need to be worried.
But I think that there's a lot of exotica around Artificial Intelligence which hasn't been explored and which I think is much closer to fruition. Perhaps that's good. Maybe it's a warning shot, so that, just as we woke up to Bitcoin as digital gold, we may wake up to a precursor to artificial general intelligence which alerts us to the fact that we should probably be devoting more energy to this absolutely crazy-sounding future problem which no humans have ever encountered.