Q&A With Joshua Gans, Co-author of Prediction Machines

I was fortunate to receive answers from Joshua Gans, one of the co-authors of Prediction Machines. Read my review of this book. Joshua Gans is a Professor of Strategic Management and holder of the Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management, University of Toronto.

Q. Most of the use cases of machine learning you typically see are for making more money (showing ads, keeping users on a platform longer, etc.). Economically that makes sense: invest to make more. Is there a case to be made for using machine learning/AI for good, not counting efficiency gains?

I don’t think there is a trade-off. It is a new innovation; it can be used for efficiency gains or for social good.

Q. What are some examples you have seen when AI was/is used for social good?

I would argue that all of the new medical applications are unambiguously for the social good. Anything that brings down the cost of accurately diagnosing conditions will have great benefits.

Q. You say in your book that humans are still the ones making most of the decisions based on the predictions of machines. If machines get better at judgement and decision making than humans, do you see that, even if humans find a job, their income will go down?

If a machine can replace all that a human can do, then this may reduce their incomes. However, what it can also do is open up opportunities for humans to do other work. There are few people I know who get everything done in their jobs.

Q. As you say in your book humans have biases and a machine can learn or be taught biases. Can a pure model ever be created?

I am not sure what a pure model is, but the difference between humans and machines is that we can see when the machine is biased and we can instruct the machine to correct for it. That seems easier than what we try to do with humans.

Q. What about AI/data failures? I don’t remember seeing any predictions on the whereabouts of Malaysia Airlines Flight 370. Facebook, with all its resources, couldn’t predict manipulation. How do we know that Google’s machines aren’t doing something that a human wouldn’t, and that humans are unable to catch it?

There are always going to be random events that no one and no machine can predict. But these things are improving over time, so I don’t think it is time yet to pass judgment on whether past failures predict future ones!

Q. A lot of people say AI is a fad or hyped up. With all the investments in computing power, human talent, and better algorithms, do you believe that it will help all parts of society and not only a few top companies and their owners?

I think it is being hyped, and there is a danger that many companies will waste money on AI projects that don’t do much good. That is why we wrote our book. We want businesses to think before they leap into AI and work out what it can really do.

About the Author

A co-author of Data Science for Fundraising, an award-winning keynote speaker, Ashutosh R. Nandeshwar is one of the few analytics professionals in the higher education industry who has developed analytical solutions for all stages of the student life cycle (from recruitment to giving). He enjoys speaking about the power of data, as well as ranting about data professionals who chase after “interesting” things. He earned his PhD/MS from West Virginia University and his BEng from Nagpur University, all in industrial engineering. Currently, he is leading the data science, reporting, and prospect development efforts at the University of Southern California.
