Google recently put its DeepMind artificial intelligence system to work recognising eye diseases. With AI also being used to diagnose cancer, and with the launch of AI-driven smartphone apps that can discuss symptoms and triage patients, it might sound like we're not too far from the creation of a fully fledged AI doctor.
Similar progress is being made putting AI to work writing software and evaluating legal contracts. AI has even started to make its mark in the creative world, generating artworks and fashion, evaluating graphic design, and helping people to create music. So does AI pose a threat to highly skilled jobs in the same way it does to ones that involve simple, repetitive tasks?
New technology has been making workers redundant for hundreds of years. But advances in the various industrial revolutions have also always created other new jobs, and people have been able to adapt. In the end, successful technologies were those where human and machine worked in harmony for the benefit of wider society. But the AI revolution may be different because it could affect as many white-collar professional roles as manual jobs.
That jobs involving lower-skilled office work such as data processing will be reduced seems inevitable and, indeed, is already starting to happen. But AI is also invading the realm of highly trained professionals, such as software engineers. However, it's largely happening in a way that shows how AI will be more of a tool than a threat to skilled workers. It may become more common for professionals to be required to learn how to harness the power of AI than for entire jobs to simply disappear.
Take computer programming. A long-standing dream of researchers is an AI that can build complex new software from scratch. In reality, that goal is probably too far-fetched: the informal, subjective descriptions usually used to brief programmers are simply too imprecise.
But researchers have realised that AI can be very powerful at optimising existing software, for instance to make it faster and use less energy. Optimising software requires very specific skills that most software developers don't have. So this kind of AI could be very helpful in improving software development without threatening existing jobs.
Researchers have started to develop novel ways for AI to detect various types of software bugs in realistic scenarios. And companies such as Facebook already use AI testing tools capable of revealing software crashes, a type of bug that causes apps to abort unexpectedly. These are vital tasks, given software's increasing importance to the smooth functioning of society, yet they are currently very hard for people to carry out.
We can predict a similar pattern in medicine. Many countries, including developed ones such as the UK, have a shortage of medical staff. For example, there are simply not enough well-trained doctors to be able to accurately diagnose the early signs of eye disease. AI could provide a solution to help human practitioners improve their capabilities. And with diagnosis taken care of, medical staff can spend more of their time actually caring for their patients.
This kind of technology could become even more helpful in countries where the number of well-trained doctors per head of population is much smaller, or for remote communities where accessing any medical advice is difficult. So the kind of research being carried out by Google DeepMind should be welcomed, not feared, by the medical profession.
We should use AI for the tasks that AI is good at, so that we can fill gaps where there is a shortage of staff and where tasks cannot be easily automated. The challenge we face is to equip students and professionals for the changes to their jobs that will occur, and for the new, as-yet-unknown jobs that AI will create. With appropriate and timely training, fears of high-skilled job losses won't come true, enabling society to concentrate more on the benefits that AI will bring.
The modern jobs market is very competitive and continuous professional development is already vital in many professions. In some ways, this could place us in a better position than people at the time of the industrial revolution. But current educational trends towards examinations and mechanistic thinking are movements in the wrong direction. Regurgitating facts and processing data is what machines are good for. Creativity and critical thinking, meanwhile, have never been so important for humanity.
Written by Leandro L. Minku, lecturer in Computer Science, University of Leicester, and Jeremy Levesley, professor of Applied Mathematics, University of Leicester. This article was originally published on The Conversation (http://theconversation.com) and is republished under a Creative Commons licence.