Martin Ford always had a bit of an obsession with robots. And more precisely, how they “think.”

Will robots take over humans’ jobs? To answer that, Martin Ford penned the New York Times bestseller, Rise of the Robots: Technology and the Threat of a Jobless Future.

More recently he’s turned to what’s inside computers, namely, artificial intelligence. His new book, Architects of Intelligence, takes a deeper dive into how good — and bad — it’s about to get for all of us. Grit Daily caught up with Ford to, well, find out.

1. You’ve had your own interesting background as it relates to AI. For those who don’t know you, tell us that.

My background is mostly in software engineering. I ran a small software company in Silicon Valley, and as part of that experience I saw how technology is transforming the workplace and impacting jobs. I became very interested in AI and robotics and how they will impact the economy, and I published my first book on that subject in 2009. That led to the opportunity to write and publish Rise of the Robots in 2015, and since then I’ve been a full-time futurist focusing on artificial intelligence and what it means for our future. I now spend most of my time writing and speaking on this topic. I believe AI will be one of the most important forces shaping our future, so it’s very important that people become more acquainted with the technology.

My most recent book, Architects of Intelligence, is intended to get inside the minds of the smartest, most prominent people who are building AI technology. These are the people who know the most, and my purpose in this book is to have deep, wide-ranging conversations with them about the future of AI and how it will impact our world.

2. For your book you interviewed a number of AI experts globally. Did you recognize any unexpected patterns?

The most important takeaway was really the lack of consensus on many important issues and questions. In other words, the smartest people working in AI do not really agree on a lot of important things. This tells us that the future of the field, and the impact it has on the world, is going to be very unpredictable. So we need to prepare for some surprising technology breakthroughs and big impacts on society.

3. Which major AI innovations are practically on the horizon?

AI will be used in science and medicine to make important breakthroughs. This is already starting to happen. In Architects of Intelligence I interview Demis Hassabis, the CEO of DeepMind. This is one of the most exciting AI startups, and it created the AlphaGo system that beat the best Go players in the world. DeepMind is already beginning to apply its technology to protein folding, an application that will have important implications for drug innovation.

Also, our devices, like Alexa and Siri, will get more powerful and flexible as the technology improves. Self-driving cars will begin to appear fairly soon, but probably only in limited ways, for example on specified routes.

We will also see more jobs being automated, both in places like Amazon warehouses and in offices.

4. How would you describe deep learning to someone unfamiliar with the technology?  How is it being used?  What’s the relationship between deep learning and AI?  

Deep learning, or deep neural networks, is the technology responsible for the vast majority of the dramatic advances we’ve seen over the past decade or so—everything from image and facial recognition, to language translation, to AlphaGo’s conquest of the ancient game of Go. Artificial neural networks, in which software roughly emulates the structure and interaction of biological neurons in the brain, date back at least to the 1950s, but it is only in the last few years that the technology has really taken off. In Architects of Intelligence, I talk to the researchers who brought about the revolution in deep learning. They tell the story and talk about how the technology is likely to progress in the future.

5. Will deep learning continue to dominate, or will other approaches come to the forefront?  

That’s a question that I asked the people I interviewed for the book, and the answers vary. Some believe that deep learning is the future of AI while others think it is only one tool in the toolbox, and that other ideas will be important going forward.

6. What does the path toward human-level Artificial General Intelligence look like?  How far off is it? 

The people I spoke to gave a wide range of predictions for when human-level AI (a true thinking machine) might be achieved. Futurist Ray Kurzweil thinks it could happen within 10 years. iRobot co-founder Rodney Brooks thinks it is 180 years in the future… so there is wide disagreement. Most people believe we need to make important breakthroughs, for example, teaching computers to learn the way people learn, before we can reach human-level AI. But others, like David Ferrucci, who led the team that built IBM Watson, think we have the knowledge we need to build such smart systems and that human-level AI can be achieved fairly soon.

7. What are the risks associated with AI?

There are a number of important risks that we need to be aware of. One threat that is already becoming evident is the vulnerability of interconnected, autonomous systems to cyber attack or hacking. As AI becomes ever more integrated into our economy and society, solving this problem will be one of the most critical challenges we face. Another immediate concern is the susceptibility of machine learning algorithms to bias, in some cases on the basis of race or gender. Many of the individuals I spoke with emphasized the importance of addressing this issue and told of research currently underway in this area. Several also sounded an optimistic note—suggesting that AI may someday prove to be a powerful tool to help combat systemic bias or discrimination.

A danger that many researchers are especially passionate about is the specter of fully autonomous weapons. Many people in the artificial intelligence community believe that AI-enabled robots or drones with the capability to kill, without a human “in the loop” to authorize any lethal action, could eventually be as dangerous and destabilizing as biological or chemical weapons.

A much more futuristic and speculative danger is the so-called “AI alignment problem.”  This is the concern that a truly intelligent, or perhaps superintelligent, machine might escape our control, or make decisions that might have adverse consequences for humanity.  This is the fear that elicits seemingly over-the-top statements from people like Elon Musk.

I talked to everyone I interviewed about these concerns and got a wide range of opinions.

8. One particular challenge of AI that you’ve written about is the potential impact on the job market and the economy.  Do you think that all of this could cause a new Industrial Revolution and completely transform the job market? Who would benefit and who would lose?

My own view is that as artificial intelligence gradually proves capable of automating nearly any routine, predictable task—regardless of whether it is blue or white collar in nature—we will inevitably see rising inequality and quite possibly outright unemployment, at least among certain groups of workers. I laid out this argument in my 2015 book, Rise of the Robots: Technology and the Threat of a Jobless Future. I believe that, eventually, we may need to adopt a policy like a universal basic income as a way to adapt capitalism to the new reality brought on by advances in AI. If we fail to adapt, we will risk social and economic upheaval.

I focused on this issue in all the conversations in Architects of Intelligence. The individuals I spoke to offered a variety of viewpoints about this potential economic disruption and the type of policy solutions that might address it.

9. Should we really worry about the fears raised by Elon Musk, Stephen Hawking and others: Could AI someday pose a genuine existential threat to humanity?

I think these concerns should be taken seriously as long-term considerations, but it would be a mistake to focus too much of our energy and concern on them now. I think the prospect of AI advanced enough to pose an existential threat is someday possible — but very likely far in the future. There are a number of privately funded organizations, like OpenAI and the Future of Humanity Institute, that are focusing on this problem, and I think that is a good thing.

However, worrying about it too much now distracts from the concerns that are much more immediate: the impact on jobs, privacy, security, autonomous weapons… these are all things that are happening already, or will happen soon. They do not depend on “science fiction” AI. We already have the technology that gives rise to these concerns, and it is improving rapidly. So we should focus on these issues.

I talked about a possible existential threat from superintelligence with everyone in Architects of Intelligence, and these conversations are fascinating and offer up a wide range of views.