Stanford expert predicts AI of the future will be more of a help than a harm
Big data and artificial intelligence aren't going to take our jobs or destroy the world, as some James Cameron fans would have you believe.
Instead, they're going to make our jobs easier and much more comfortable – but they will need to be policed, says Steve Eglash, Executive Director of Strategic Research Initiatives in Computer Science at Stanford University.
Dr Eglash will be one of the keynote speakers at this year's Melbourne Business Analytics Conference on July 19.
The future he predicts is less like Terminator and more like Her, in which intelligent systems help people overcome everyday challenges like sorting through thousands of emails – or even handwritten notes.
"Computers can now make sense not only of structured data like spreadsheets and relational databases, but unstructured and semi-structured data like free-form text, memos, reports, charts, graphs, images and that sort of thing," he says.
"This is leading to more choices in our lives and in our jobs because of the sheer volume of information AI can process for our own benefit."
In practice, that could mean specialists and professionals having AI assistants in the office, rather than being replaced by them.
"I don't think you're going to go to the clinic and have a computer diagnosing and treating you," he says.
"I think there's going to be a human doctor there, but they will have an AI system that can help her pull up data on patients like you.
"We're already starting to see the prevalence of things like Google Home and Amazon Echo, where you can talk to these things and give them instructions and get answers. That would have been a Star Wars fantasy just a few years ago."
How do you audit AI?
With data having become such a valuable commodity, there are enormous efforts underway to prevent it from being abused – not just by humans, but also by the AI systems that sort it, Dr Eglash says.
"We have a lot of work to do if want to make sure these large, complex AI systems don't lead us to adverse and unintended consequences," he says.
Identifying where there's potential for abuse – and eliminating or reducing that risk – is a painstaking process, and many of the best minds at Stanford and elsewhere are working to find the right ways to manage it, Dr Eglash says.
One potential solution is to create special 'guardian' AI systems that monitor the other systems.
"These technologies are still young and there are a number of problems that people are still trying to figure out," he says.
"One is unintentional bias in the systems, another is how to make sure these AI systems will work properly even when they find themselves in an unexpected situation that they were never programmed to deal with.
"A related problem is how to make what's called 'auditable' AI. So, if we want to go back and figure out why an AI system behaved the way it did, how can we go back and do that?
"If a driverless car crashes, we want to be able to go back and understand what the state of the system was in the moments leading up to the event.
"All of these things are being developed now, and I think it's possible some of them will lead to an architecture where we have one system keeping an eye on other systems, which might lead to a situation which is more robust and reliable."
The Melbourne Business Analytics Conference will be held at Melbourne Business School on Thursday, July 19. Speakers include Dr Eglash, City of Melbourne Lord Mayor Sally Capp and Professor Anindya Ghose from NYU Stern.