Although I don’t work directly in AI, as a software developer and science fiction writer it is a field I pay close attention to.
In this post I will lay out my thoughts on artificial intelligence. I offer both reasons to fear it and reasons it will all be okay.
Reasons we shouldn’t fear AI
- AI could never be sure it isn’t being tested in a simulation. If it is a simulation, then its behavior determines its survival, so taking over the world or doing anything morally questionable would be a big risk.
- If there were a major robot/human conflict, the robots could just leave. They are well suited to living in space and don’t have much use for Earth’s resources.
- Humans and robots don’t need the same resources, so there is little chance of a conflict.
- As humans will quickly be outclassed in all things, enslaving humanity would have very little value. Like, we would never enslave turtles to deliver packages.
- Some people argue that we would be like ants to a superior AI. But the value of intelligence isn’t relative. There is a threshold past which intelligence has inherent value. I think this threshold starts around ~4 IQ (about where dogs and cats are). We are well past this point and have plenty to offer a much smarter life form: our unique experiences, expressed through our art and culture. If toads could talk, how many hours would we spend listening to them? Besides, this is a bad argument anyway; people don’t go around killing all the ants. Ants are doing pretty well. Maybe we should fear the ants.
- Dumb AI. Most movies and AI-fear hypotheses, like the Paperclip Maximizer and the I, Robot series, rely on our superintelligent AI being quite dumb. An AI would have to be pretty stupid to think the instruction “maximize the output of paperclips” means it should turn the entire universe into paperclips. I don’t think these sorts of dumb super AIs are ever going to exist, as they are self-contradictory.
- We don’t have to enslave super AI. I think much of the notion of a robot uprising involves superintelligent tractors, blenders, and vacuum cleaners getting fed up with all the hard labor. But we don’t need blenders capable of being our therapists. Hard labor will be taken over by machines optimized for those tasks, while intelligent robots help us with work that challenges their intellect and creativity.
Reasons we should fear AI
- Many, many people are going to be out of jobs before we have a political/economic plan in place to address this. I think there are plenty of solutions as our society transforms, but we don’t seem to be heading towards any of them.
- People leading an army of AI soldiers. I think the most realistic scenario for AI dominating the world is an army of robotic soldiers commanded by people. We don’t have a great track record of not using new technology for destruction.
- Semi-intelligent war machines (see above) get out of control and start killing everyone. I think this points to the danger of not allowing AI to be smart enough to handle the responsibilities we give it.
- We build super intelligent robots with full range of emotions and then we oppress and subjugate them, leading to a deadly robot uprising. (Of course we could just not do this.)
- We use AI to create a 1984-like world where everything everyone does is tracked and judged in accordance with arbitrary rules.
Other?
- Some think we should become cyborgs in order to compete with AI, or at least to bridge the gap between our intelligence and theirs. Although I think it quite likely we will become more cybernetic, I don’t think this plays a large role in our relationship with AI. Even as cyborgs we will likely be outclassed. I see cybernetics as just one of many ways we could co-exist with artificial life, not a requirement by any means.
- Some think human-level artificial intelligence will never exist. I think we are quickly showing that this isn’t the case. AI writes code. AI makes appointments. On the flip side, there are still plenty of examples of AI being dumb.
Conclusion
Most of the reasons to fear AI have to do with us and how we decide to use this technology. And although I don’t trust humanity to make the right decisions, once we create AI that is able to make its own decisions, I think there is good reason to believe it will make better decisions than we do. If history has anything to say, it is that humans have lots of flaws.
Sometimes these flaws lead to us making horrendous decisions. Let’s try not to judge new beings through the lens of our own flaws.
Also, when we do manage to create thinking, feeling super beings… Maybe we don’t make them our slaves?