LTTS’ Chest-rAi™ and Intel® AI Analytics and OpenVINO™ Toolkits: Redefining Chest Radiology Outcomes


LTTS’ Chest-rAi™

You may be wondering: what is Chest-rAi? It is a deep learning algorithm that detects and isolates abnormalities in chest X-ray imagery.

The beauty of Chest-rAi is that it's fast and accurate: it achieves 95% accuracy, significantly higher than most comparable methods currently available.

To further enhance the algorithm’s capabilities, we adopted the Intel AI Analytics and OpenVINO toolkits. With them, we optimized the inference pipeline for Chest-rAi, making it 1.84 times faster on an Intel® Xeon® Platinum 8380 CPU @ 2.30GHz (Ice Lake) than the non-optimized models running on the same hardware.

Chest-rAi™ and CARES: A New Paradigm

To address the growing demand for trained radiologists and minimize the risk of errors, LTTS’ Chest-rAi™ solution leverages a novel, deep learning-based architecture: Convolution Attention-based sentence REconstruction and Scoring, or CARES. The solution has been found to be effective for the identification and localization of radiological findings in chest X-ray imagery.

Chest-rAi™ generates a clinically meaningful description to aid radiologists. Its pipeline combines:

  1. CNN (convolutional neural network) feature extraction,
  2. Ocular opacity and anatomical knowledge-infused attention-based graphs,
  3. Hierarchical two-level LSTM (long short-term memory)-based reconstruction module, and
  4. Pre-trained transformers along with clustering and scoring models to help generate more grammatically and clinically accurate reports.

The solution is combined with a novel scoring mechanism, the Radiological Finding Quality Index, to evaluate the exact radiological findings, their localization, and the size/severity of each finding present in the generated report.

CARES follows a linear approach, leveraging an attention-based CNN for image feature extraction and multi-label classification, plus multiple visual attention-based LSTMs for generating the report. Figure 1 illustrates the scheme.

Figure 1: Convolution Attention-based sentence REconstruction and Scoring (CARES)

The multi-task learning framework for multi-label classification leverages a common CNN-based feature extraction model for all labels. This is backed by an attention layer, a global pooling layer, and a fully connected (FC) layer for generating predictions for each label. Each label has a separate FC layer, which enables the model to generate predictions for multiple labels simultaneously without one impacting the others.

A separate attention layer is used for each label. This ensures the model can attend to different regions of the image simultaneously and generate feature maps relevant to the given label, yielding a two-fold benefit (see the sketch after this list):

  • Improved classification accuracy, as the output of each attention layer is focused only on the part of the image relevant to that label, and
  • A better convolutional feature map (encoded image features) for the decoder.
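
To make the per-label attention idea concrete, here is a minimal sketch of such a multi-label head in PyTorch. The DenseNet-121 backbone, the 14-label count, and all layer sizes are illustrative assumptions, not LTTS’ published implementation:

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PerLabelAttentionHead(nn.Module):
    """One attention + pooling + FC branch, so each label attends to its own image regions."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)  # 1x1 conv produces a spatial attention map
        self.fc = nn.Linear(channels, 1)                   # separate FC layer per label

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.attn(feats).flatten(2), dim=-1)  # (B, 1, H*W)
        pooled = (feats.flatten(2) * weights).sum(dim=-1)             # attention-weighted pooling -> (B, C)
        return self.fc(pooled)                                        # per-label logit, (B, 1)

class CaresStyleClassifier(nn.Module):
    """Shared CNN feature extractor feeding one independent attention head per label."""
    def __init__(self, num_labels: int = 14):  # label count is a placeholder
        super().__init__()
        self.features = models.densenet121(weights="DEFAULT").features  # shared backbone
        self.heads = nn.ModuleList([PerLabelAttentionHead(1024) for _ in range(num_labels)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x)  # (B, 1024, H, W) for DenseNet-121
        return torch.cat([head(feats) for head in self.heads], dim=1)  # (B, num_labels)

logits = CaresStyleClassifier()(torch.randn(1, 3, 224, 224))
probs = torch.sigmoid(logits)  # independent probability per finding
```

Because each label owns its attention map and FC layer, one finding’s gradient cannot force the shared features to compromise on another, which is the two-fold benefit described above.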

LTTS Optimizes Chest-rAi With Intel® AI Analytics Toolkit & OpenVINO™ Toolkit

You may be wondering how LTTS is able to optimize turnaround times for Chest-rAi with the OpenVINO and AI Analytics toolkits. Let us explain.

First, LTTS leveraged a set of extensions from the AI Analytics Toolkit that made it easy to deploy PyTorch models on Intel® processors (a sketch of this optimization pass follows the list). By doing so, the LTTS team was able to create an optimized inference pipeline and reduce …

  • Turnaround time for Chest-rAi from 8 weeks to just 2 weeks,
  • Chest-rAi model (FP32) size by approximately 39%, and
  • Inference time by 46%.
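
While LTTS has not published its exact pipeline, a typical Intel® Extension for PyTorch (IPEX) optimization pass looks like the sketch below; the DenseNet-121 stand-in model and input shape are assumptions:

```python
import torch
import intel_extension_for_pytorch as ipex  # IPEX ships with the AI Analytics Toolkit
import torchvision.models as models

# Stand-in for a Chest-rAi model; the real models are not public.
model = models.densenet121(weights="DEFAULT").eval()

# ipex.optimize applies Intel-CPU-specific operator fusion, memory-format,
# and weight-layout optimizations (FP32 path by default).
model = ipex.optimize(model)

example = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    # TorchScript tracing + freezing enables further graph-level fusion.
    traced = torch.jit.trace(model, example)
    traced = torch.jit.freeze(traced)
    output = traced(example)
```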

Most of the gains were observed in the DenseNet-121 and DenseNet-169 architectures. The optimized inference pipeline for Chest-rAi™ was 1.84 times faster on an Intel® Xeon® Platinum 8380 CPU @ 2.30GHz (code named Ice Lake).

Second, LTTS adopted the OpenVINO toolkit, a suite of tools that helps developers optimize the performance of deep learning workloads across a variety of devices, with optimizations tuned for Intel® processors. To strengthen Chest-rAi’s capabilities, LTTS chose the OpenVINO toolkit and Intel® Extension for PyTorch (IPEX) as the software components to optimize the Chest-rAi models. These optimizations helped LTTS developers reduce the size of the models, driving scalability without increasing or upgrading the hardware.
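As a hedged illustration of that flow, the sketch below converts a PyTorch model to OpenVINO’s intermediate representation (IR), saves it with FP16 weight compression (one common way the model ends up smaller on disk), and compiles it for CPU inference. The stand-in model and file name are assumptions:

```python
import torch
import torchvision.models as models
import openvino as ov  # OpenVINO Python API (2023.1+)

# Stand-in for a Chest-rAi model.
torch_model = models.densenet121(weights="DEFAULT").eval()

# Convert the in-memory PyTorch module to OpenVINO IR.
ov_model = ov.convert_model(torch_model, example_input=torch.randn(1, 3, 224, 224))

# compress_to_fp16 stores weights in FP16, shrinking the IR on disk.
ov.save_model(ov_model, "chest_xray_model.xml", compress_to_fp16=True)

# Load and compile the IR for the target device, then run inference.
core = ov.Core()
compiled = core.compile_model(core.read_model("chest_xray_model.xml"), "CPU")
result = compiled(torch.randn(1, 3, 224, 224).numpy())[compiled.output(0)]
```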

The Results of LTTS' Optimization

The OpenVINO and Intel AI Analytics toolkits have helped streamline and improve the turnaround time for a Chest-rAi diagnosis. Since a quicker response time can significantly affect a patient’s overall diagnosis, this has been a transformative improvement to the solution’s capabilities. As per LTTS:

“Not only are we faster, but we're also more accurate. Our Chest-rAi algorithm is now able to achieve an error rate of less than 1%. We're proud of the work we've done, and we think you'll be impressed with the results.”

Aniket Joshi, Delivery Head, LTTS

How You Can Use OpenVINO and Intel AI Analytics Toolkits to Optimize Your Own Models

Let's take a look at how LTTS used these kits to speed up the inference pipeline for Chest-rAi:

First, LTTS’ engineering team used OpenVINO to build a custom acceleration package that optimized the performance of the model, as shared above.

Second, they used the AI Analytics toolkit to convert the model into a format that could run on their infrastructure. This optimized the inference pipeline for Chest-rAi, making it 1.84 times faster (compared to the non-Intel-optimized models) on an Intel® Xeon® Platinum 8380 CPU @ 2.30GHz (Ice Lake).
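
If you want to check for that kind of speedup on your own hardware, a simple latency harness like this sketch works; the stand-in model, iteration counts, and input shape are assumptions, and your numbers will differ from LTTS’ 1.84x:

```python
import time
import torch
import torchvision.models as models
import openvino as ov

def mean_latency_ms(run, inp, iters=50, warmup=5):
    """Average single-image latency in milliseconds, after a short warmup."""
    for _ in range(warmup):
        run(inp)  # warm caches and lazy initialization
    start = time.perf_counter()
    for _ in range(iters):
        run(inp)
    return (time.perf_counter() - start) / iters * 1000

x = torch.randn(1, 3, 224, 224)
baseline = models.densenet121(weights="DEFAULT").eval()  # stand-in model

with torch.no_grad():
    base_ms = mean_latency_ms(lambda t: baseline(t), x)

# Compile the same model with OpenVINO and time it on the same input.
compiled = ov.compile_model(ov.convert_model(baseline, example_input=x), "CPU")
ov_ms = mean_latency_ms(lambda t: compiled(t.numpy()), x)

print(f"baseline: {base_ms:.1f} ms  OpenVINO: {ov_ms:.1f} ms  speedup: {base_ms / ov_ms:.2f}x")
```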

The takeaway is that you can use the OpenVINO and AI Analytics toolkits, separately or together, to speed up your own models. So, if you're looking for a performance boost, these are two tools you should definitely check out.

What's Next for LTTS and Chest-rAi?

LTTS continues to leverage the capabilities of Intel AI Analytics and OpenVINO toolkits to optimize turnaround times for Chest-rAi. We're seeing some really impressive results with this combination and are committed to making sure that our customers get the best possible experience.

LTTS is also looking into ways to further reduce the size of the Chest-rAi inference pipeline. This is something that has been a challenge for us, but we're confident that we can find a solution that meets our high standards.

And finally, LTTS is exploring new ways to improve our products and services with Intel AI tools, solutions, and frameworks.

Learn more at: