UPDATED 22:30 EDT / JANUARY 15 2017

EMERGING TECH

How Stanford researchers are using deep learning AI to fight blindness

Microsoft Corp. announced last year that it was intent on curing cancer, using artificial intelligence to do such things as tracking a patient’s tumor. The year before that, IBM Corp. chose healthcare as a prime market for Watson, saying the AI-driven cognitive system would “revolutionize” the industry.

It’s clear that AI of various kinds will play a big role in healthcare in coming years. The question is how big and in what way. One group of researchers at Stanford University that has been part of this transformation for some time offered some clues in a recent interview with SiliconANGLE.

The group is using deep learning, a branch of machine learning that tries to roughly emulate how the brain learns and solves problems, to detect diabetic retinopathy. That disease, a complication of the diabetes that afflicts more than 350 million people worldwide, is one of the leading causes of blindness.

SiliconANGLE talked to Stanford Ph.D. students Apaar Sadhwani and Jason Su about the role deep learning is playing in keeping the disease from progressing to sight loss and how similar technology could be used to help treat other diseases in the future. This is an edited version of the email interview.

Q: In short, what exactly does the AI do?

A: We are developing a new automated process for reviewing images by leveraging advances in deep learning. With deep learning, computer algorithms are no longer limited to detecting a predefined set of conditions but are free to adapt and learn as they gain more information.

By using the advanced computing power of the Amazon Web Services cloud, researchers can now deliver deep learning capabilities at a fraction of the cost of legacy infrastructure. That has enabled them to build a database of tens of thousands of images against which to calibrate the system and to build a medical tool that may one day safeguard the quality of life of millions.

Q: Can you give us some background on the development of this technology?

A: Computer-aided algorithms for the segmentation and diagnosis/classification of medical images have been a broad area of research for decades. In some cases, such as breast cancer, these tools are actively used by clinical physicians. However, the general consensus is that they do not work very well and do not provide much more insight than expert visual inspection. Recently, there has been a resurgence of interest in the area due to the success and advances of deep learning in computer-aided image and pattern recognition.

For the first time in history, these techniques have surpassed human-level capabilities. However, for these deep learning algorithms to work, they require a large amount of data to train, often on the order of hundreds of thousands to millions of examples, as well as the hardware to crunch so much data. Our work was enabled by an Amazon Web Services grant that provided us access to its vast computing resources.

Q: How did it all start?

A: Our work on this problem began with the Diabetic Retinopathy Detection Challenge on Kaggle, which was sponsored by the California Healthcare Foundation and EyePACS. These organizations currently provide fundus imaging to underserved communities in California with a mobile camera that is brought to local neighborhoods. The images are reviewed by volunteer ophthalmologists in their free time. There are many more images being gathered than there are expert eyes to examine them, so there is great interest in automating diagnosis. With a $100,000 prize pool, a dataset of more than 120,000 images and more than 650 teams worldwide, this was the first large-scale attempt at bringing deep learning to ophthalmology.

Q: What stage is the technology at right now? Is it being used outside of trials?

A: This technology is currently being developed by many startups and tech companies, such as Eyenuk, Remidio and DreamUp Vision. While the technology is orders of magnitude faster than humans, we believe that, by spending more time on a given subject, humans can achieve a more accurate diagnosis. Therefore, the best application of this technology is screening large numbers of patients, with the final diagnosis still made by doctors.

In the U.S. and many other countries, there are significant regulatory barriers to bringing such a technology into the clinic. There are also challenges in integrating it into physicians' workflows so that they can gain trust and confidence in the automated diagnosis, especially since such algorithms have been lackluster in the past.

Q: Can you give an example of its efficacy compared with manual diagnosis?

A: The chief benefit of this particular technology right now is that it is incredibly fast: less than a second, compared with one to five minutes for a human. Moreover, unlike a human, the results are consistent and there is no loss of performance due to the fatigue of going through a large number of images. This means it can be used very effectively to screen a much larger number of subjects and potentially to refer only those who need expert review, close monitoring or interventional surgery.

Q: Can you explain how this machine learning tech works in this particular process?

A: Deep learning is a branch of machine learning concerned with the training and implementation of multi-layer neural network models. In more conventional machine learning, a great deal of effort is often spent incorporating expert knowledge to capture the relevant features of a dataset so that algorithms can then make accurate predictions.

This is typically referred to as feature engineering, a process that is often tedious, may miss some patterns and may not be robust enough to filter out the noise encountered in realistic scenarios. Deep learning largely obviates the need for feature engineering because it uses highly expressive models that learn the relevant features from the training data. Such models, however, often have several million parameters, which require substantial data, and in turn lots of compute power, to train. Data and compute power are the two cornerstones of deep learning.
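To make that contrast concrete, here is a minimal sketch, in PyTorch, of the kind of multi-layer neural network being described. It is a hypothetical toy model, not the Stanford team's actual architecture; the point is simply that its convolutional filters are parameters learned from labeled fundus images rather than hand-engineered features.

```python
import torch.nn as nn

class RetinaScreeningNet(nn.Module):
    """Hypothetical small CNN: its convolutional filters are learned from
    labeled fundus images instead of being hand-engineered features."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),        # collapse feature maps to one vector per image
        )
        self.classifier = nn.Linear(32, 1)  # single logit: evidence of disease

    def forward(self, x):                   # x: (N, 3, H, W) batch of fundus photos
        h = self.features(x).flatten(1)
        return self.classifier(h)
```

A production system would use a far deeper network with millions of parameters, which is why the training data and compute requirements mentioned above become the limiting factors.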

For example, given an image of a diseased eye, we want the model to output a high probability of disease; given a normal eye, we want this probability to be low. The model's parameters act like knobs, and they are turned slightly in the direction that favors the model making the correct prediction for this image. This process is repeated for several thousand images of both diseased and normal eyes. As a consequence, the parameters of the model capture precisely those features of the input that make a difference to the output. In this case, it means the model automatically learns features of the retina that help discriminate between diseased and normal eyes.
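A hedged sketch of that "knob turning," assuming a PyTorch-style training loop: each gradient step nudges every parameter slightly in the direction that makes the correct label (diseased vs. normal) more likely. The stand-in model, optimizer settings and batch shapes below are illustrative, not the team's actual pipeline.

```python
import torch
import torch.nn as nn

# Stand-in model: in practice this would be a deep CNN like the one sketched above.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()           # binary cross-entropy on the disease logit

def train_step(images, labels):
    """images: (N, 3, 64, 64) batch of fundus photos; labels: 1 = diseased, 0 = normal."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels.float())  # high when predictions disagree with labels
    loss.backward()                         # compute which way each "knob" should move
    optimizer.step()                        # turn each knob slightly in that direction
    return loss.item()

# Dummy batch just to show the call; real training repeats this over
# many thousands of labeled retinal images.
loss = train_step(torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,)))
```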

Q: What impact do you think machine learning will have on healthcare?

A: We believe that medical imaging is ripe for a revolution thanks to the advancements of machine learning. There will be barriers and resistance, as is typical with the introduction of any new technology, in this case particularly from the radiological community. But we feel that, given the strength of these algorithms and the benefit they can provide by making expert disease diagnosis accessible to the entire world, adoption is inevitable.

This technology must become part of routine healthcare for ethical reasons alone. The alternative is to have millions of people slip through the cracks and be neglected by overburdened and/or prohibitively expensive healthcare systems.

Cancer is another area where this technology is being actively developed; a new challenge for mammography is currently under way.

Q: What obstacles are faced right now in the development and implementation of this tech?

A: The broad application of this technology to all of medical imaging and radiology is limited by several factors including computation (3D scans can be huge), algorithms (incorporating data from multiple image modalities, biopsies, chemical biomarkers, health records), and the diseases themselves (some diseases are not well enough understood to develop an effective algorithm with supervised learning).

One of the greatest obstacles right now is the “siloing” of data: hospitals are protecting their data and are unwilling to share it widely. This can be due to privacy concerns or because the institutions are each developing their own technology and do not want to benefit competitors. Building a clean, labeled dataset is also no easy feat and comes at some expense to hospitals, as data are often stored in old, hard-to-search archival systems (think tapes, DVDs and paper at worst).

The Obama administration started an initiative in traumatic brain injury that requires NIH-funded researchers to submit their data to a central database. This may provide a paradigm for the greater collaboration and data sharing needed to apply this technology to other diseases in the future.

Q: Is this technology something medical professionals in this field are aware exists?

A: Practicing general physicians are probably unaware of the technology. Radiologists are probably somewhat informed about it, but we imagine they are likely cautiously optimistic, as their past exposure to computer-aided diagnosis algorithms has shown them to be mediocre. Medical imaging researchers at universities are likely the most keen on the technology.

Q: If this disease occurs mostly in the developed world, how could such tech be used in developing nations? What kinds of infrastructure, skills and equipment are needed?

A: This disease affects most diabetics, so it is a problem throughout the entire world. In countries like India or China, this technology will probably see the most benefit, since the volume of patients is far more than the available doctors can handle. AlemHealth is one startup focused on delivering such tech to developing nations. The equipment needed to apply this technology is fairly cheap; AlemHealth is doing it with a small compute box it brings to hospitals that probably costs a few hundred U.S. dollars.

For a hospital-wide server installation, assuming the imaging machines are already there, the primary requirements are simply a GPU (graphics processing unit, $50 to $500) and some networking infrastructure to deliver images and data. For a combined imaging and diagnosis platform, one can also conceive of even lower-cost implementations, such as integration into a smartphone app with a portable ophthalmoscope, or a 3D-printed camera assembly with a Raspberry Pi for computation.
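To show how lightweight the screening step itself can be on such hardware, here is a hedged sketch of loading a previously trained, exported model and scoring a single fundus photo. The file names, input size and PyTorch tooling are assumptions for illustration, not details of AlemHealth's or the Stanford group's systems.

```python
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),   # fundus photos vary in size; normalize them
    transforms.ToTensor(),
])

# Hypothetical TorchScript export of a trained screening network.
model = torch.jit.load("retina_model.pt")
model.eval()

def screen(path):
    """Return the model's probability that the fundus photo at `path` shows retinopathy."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        prob = torch.sigmoid(model(image)).item()
    return prob

# A score near 1.0 would flag the patient for expert review by an ophthalmologist.
print(screen("patient_0001.jpg"))
```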

Photo credit: Mireille Raad via Flickr
