Fighting Tuberculosis with GPUs and Deep Learning

For those in developing countries with tuberculosis, the difference between life and death often comes down to having a physician with the expertise to properly read chest X-rays. And the numbers show that many are dying of the disease unnecessarily.

TB has passed HIV/AIDS as the world’s top infectious killer, with the World Health Organization estimating that 1.8 million people died from the disease in 2015. Some 95 percent of those deaths occurred in low- and middle-income countries where access to radiological expertise is often minimal.

A pair of researchers at Philadelphia’s Thomas Jefferson University aim to change that. By combining their passions for chest X-rays and deep learning, Paras Lakhani, an assistant professor of radiology, and Baskaran Sundaram, a professor of radiology, may have opened the door to stemming the disease’s toll.

“A lot of developing countries just don’t have the resources to deal with these challenges,” said Lakhani.

Deep Dive Into AI

Fortunately, Lakhani’s decision two years ago to dive into deep learning may lead to a solution. He said he got “really obsessed” with the method, and read hundreds of papers before acquiring a GPU and building his own machine.

He then obtained public TB datasets from the National Institutes of Health, the Belarus Tuberculosis Portal, and Thomas Jefferson University Hospital, so he could start testing models.

Armed with an NVIDIA TITAN X GPU, and supported by the Caffe deep learning framework, CUDA and cuDNN, as well as the NVIDIA DIGITS deep learning GPU training system, Lakhani and Sundaram trained a model using the more than 1,000 public TB images they had vetted.

Lakhani said the model ran 40 times faster than the benchmark tests the pair had run on CPUs.

The work could lead to healthcare providers in developing countries being able to upload chest X-rays, compare them against Lakhani and Sundaram’s model, and then accurately diagnose any anomalies.

Before that happens, however, there are additional challenges to overcome. For instance, chest X-rays are typically huge files of 2,500 by 3,000 pixels. And while GPUs can handle that resolution, the sheer volume of data strains the deep learning models.

Lakhani said he and Sundaram are constantly experimenting with ways to get around this — by uploading only portions of images with abnormalities, or by creating deeper networks, for instance.
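The first workaround — feeding the network only the image regions containing abnormalities — can be sketched in a few lines of plain Python. This is only an illustrative example, not the researchers’ actual pipeline; the image dimensions, patch size and coordinates below are all assumptions.

```python
# Illustrative sketch: crop a fixed-size patch around a suspected
# abnormality in a large chest X-ray so only that region is fed to the
# network. Dimensions and coordinates are hypothetical.

def crop_patch(image, center_row, center_col, size=224):
    """Return a size x size patch centered on (center_row, center_col),
    clamped so the patch stays fully inside the image bounds."""
    rows, cols = len(image), len(image[0])
    half = size // 2
    # Clamp the top-left corner so the full patch fits in the image.
    top = max(0, min(center_row - half, rows - size))
    left = max(0, min(center_col - half, cols - size))
    return [row[left:left + size] for row in image[top:top + size]]

# Toy example: a 3,000 x 2,500 "X-ray" of zeros with one marked finding.
image = [[0] * 2500 for _ in range(3000)]
image[1500][1200] = 1  # hypothetical abnormality location
patch = crop_patch(image, 1500, 1200)
print(len(patch), len(patch[0]))  # 224 224
```

Clamping the corner keeps the patch fully inside the image even when the finding sits near a border, so every patch handed to the network has the same shape.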

“I’ve learned how to tweak the hyper-parameters much better, and basically build better models,” said Lakhani.

More Challenges on Horizon

The pair will also have to tackle an even more complicated challenge: subtle findings that are apparent in the largest X-ray files become harder to spot as the resolution shrinks.

As a result, Lakhani and Sundaram are working on achieving the right balance of resolution and data so that their models can detect the subtler indicators of TB infection.

That said, Lakhani understands there are limits to what the models can achieve.

“I’ve seen so many subtle examples of TB that I have my doubts that even models like this can catch it all,” he said.

Lakhani and Sundaram haven’t yet decided on how exactly they’ll put their model to work once it’s done, but they aren’t ruling out a commercial approach.

“I really don’t know what the best way is to help the world,” said Lakhani. “Sometimes commercialization is best because you can reach out to people and work with nonprofits.”

What Lakhani is pretty certain of is that his work with Sundaram won’t stop with TB. He intends to take what he’s learned and apply it to similar chest X-ray challenges such as chest fractures, pneumonia, lung infections, heart abnormalities and aorta issues, to name a few.

Feature image credit: Yale Rosen. Licensed via Creative Commons 2.0.

The post Fighting Tuberculosis with GPUs and Deep Learning appeared first on The Official NVIDIA Blog.

Intel, AMD Just Delivered Two Great Reasons to Upgrade Your Data Center

Think of it as a multiple choice test, with no wrong answers. Intel last week launched its new Skylake Xeon CPUs. AMD last month launched its next-generation EPYC CPU. These are both great options that strengthen the case for upgrading your server infrastructure now.

Whichever path you choose, the latest CPU improvements boost the value of accelerated computing. Fast CPUs let the serial portions of an application finish quickly, so they don’t limit overall performance.

This allows your GPU to handle all the number crunching without being held up.

And in the post-Moore’s law era, accelerated computing offers a path to big performance gains. Those gains, in turn, fuel rapid innovation in AI, scientific breakthroughs, visualization, and data analytics.

It’s an approach being used by the biggest web companies — Amazon, Baidu, Facebook, Google — serving hundreds of millions of people every day. It’s one being used by teams of researchers doing cutting-edge science. And it’s becoming a key tool for Fortune 500 companies looking to lead, rather than lag behind, the disruptions being caused by AI.

Upgrade Your Data Center and Reduce Costs

Accelerated computing is everywhere, and now, thanks to a new generation of processors, your technology options are better than ever.

NVIDIA’s approach to accelerated computing provides incredible performance gains — 1.5x or higher per year. Powerful server nodes can often replace 10 or more non-accelerated nodes and deliver the same performance, slashing infrastructure acquisition costs. On top of that, the savings in operational costs, including energy consumption and floor space, mean you can move with confidence when making the decision to upgrade.
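The consolidation argument above is simple arithmetic, sketched below as a rough illustration. The node counts, prices and power figures are hypothetical placeholders, not NVIDIA’s numbers.

```python
# Hypothetical back-of-envelope consolidation math: one accelerated node
# replacing ten CPU-only nodes of equal aggregate throughput. All prices
# and power figures are made up for illustration.

CPU_NODES_REPLACED = 10
cpu_node_cost = 9_000        # USD per CPU-only node (assumed)
gpu_node_cost = 40_000       # USD per accelerated node (assumed)
cpu_node_power_kw = 0.5      # average draw per CPU node (assumed)
gpu_node_power_kw = 2.0      # average draw per GPU node (assumed)
kwh_price = 0.10             # USD per kWh (assumed)
hours_per_year = 24 * 365

capex_saving = CPU_NODES_REPLACED * cpu_node_cost - gpu_node_cost
power_saving_kw = CPU_NODES_REPLACED * cpu_node_power_kw - gpu_node_power_kw
annual_energy_saving = power_saving_kw * hours_per_year * kwh_price

print(f"Acquisition saving:   ${capex_saving:,}")
print(f"Annual energy saving: ${annual_energy_saving:,.0f}")
```

Under these made-up figures, the single accelerated node saves on acquisition cost up front and keeps saving on power every year; floor-space and cooling savings would come on top of that.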

The question is no longer if GPUs should be deployed, but how many. With more than 450 HPC applications — including 10 of the top 10 — and all major deep learning frameworks optimized for GPUs, every data center focused on HPC and AI workloads will benefit from deploying GPUs. And now high performance computing is coming to the enterprise.

Learn More

We’re here to help you get started. Read our white paper “Accelerated Computing and the Democratization of Supercomputing” to learn more.



The Fun’s Just Begun: Gamers, Modders, Researchers Piling into VR Funhouse

A quarter million VR enthusiasts have downloaded it. More than 25 communities have created mods for it. Researchers from Cornell University in upstate New York to Israel have even found new ways to meld the virtual and real worlds with it.

A year after its release, VR Funhouse — our first game — has been widely adopted by enthusiasts, researchers and game developers across the VR community. Gamers have been strapping on headsets to enjoy our wacky virtual funhouse everywhere from Greenland to Yemen, Nepal to Mozambique.

We built this game to show how sight, sound and touch can be brought together in virtual reality to create experiences with unparalleled levels of immersion. And VR Funhouse is packed full of NVIDIA GameWorks and VRWorks tech and built on Epic’s Unreal Engine 4.

But VR Funhouse has become more than just a showcase for our own technologies; it’s become a playground for VR developers around the world. We’ve released the game’s source code via GitHub, and late last year announced availability for Oculus Rift with Touch. In addition, we hosted and recorded a series of Twitch developer sessions to help developers get started.

We hosted a Game Jam with Epic and an online mod contest, resulting in 28 mods published on Steam. To inspire the community, we even released our own series of mods, including Winter Wonderland during the holiday season.

A Virtual Experience That’s Literally Hands On

Cornell University’s Organic Robotics Lab.

While VR Funhouse has gotten rave reviews from gamers, it’s about more than fun and games.

At Cornell University’s Organic Robotics Lab, researchers are working with NVIDIA to create kinesthetic haptics, which work by pushing against the user’s hands or applying resistance to the hand’s motion, simulating grip and interaction with real-life objects. Cornell’s demo is built on VR Funhouse.

The collaboration uses Cornell ORL’s manufacturing process for Omnipulse, inflatable silicone controllers that can be powered by an air compressor or a small CO2 bike-inflator tank.

The team has even built a “skin” for the HTC Vive controllers, with a dozen inflatable chambers that react to output from NVIDIA’s PhysX physics engine.

Virtual Rehabilitation

VRPhysio is leading another effort using VR Funhouse to blur the lines between the virtual world and the physical one. The developer, based out of Israel and Boston, is modifying VR Funhouse to support its virtual reality rehabilitation platform.

Its software, designed by physical therapists and game developers, uses VR Funhouse to help patients through neck and cervical spine exercises for upper-body physical rehabilitation treatment.

Try VR Funhouse for Yourself

These are just the latest examples of our belief that great VR is about more than just great visuals. So strap on a headset and try it out for yourself. Download it today, or join the celebration and check out our infographic, below.


Psycho-Surfing: Startup Brings Artificial Emotional Intelligence to the Web

Text. Video. Pictures. Audio. We’re used to searching the web for different kinds of content. Now, one startup is striving to add a very different kind of search category: emotion.

The first public pilot of UK-based Emotions.Tech’s artificial emotional intelligence, launched in May, allows users to search according to how they want the results to make them feel.

Emotions.Tech CTO Paul Tero says the ability to analyse digital content according to the emotion it provokes can redefine our relationship with technology. Imagine online advertisers able to position their adverts on emotionally appropriate pages. Or a virtual assistant that can read your moods.

Understanding emotions, however, takes a lot of processing power. “We need that acceleration to keep up with the complexities of human emotion,” Tero says. To do that, Emotions.Tech turned to GPU-powered deep learning to rank, list and search web pages according to their emotional content.

The World’s First Emotional Search Engine

Searching for happiness? Thanks to AI, you can do that with a click.

The startup then teamed up with search provider Mojeek to create the world’s first emotional search engine. Mojeek users can now search the web and select results according to whether they’re likely to cause love, laughter, surprise, anger or sadness.

Unlike traditional sentiment analysis tools, Emotions.Tech’s solution doesn’t just count the number of positive or negative words in a text or parse the tone of the writer. Instead, it focuses on the reader’s emotional reaction.

To do this, the company listens to 1.5 million reactions on social media every single day. It then uses this data to train artificial neural networks, which learn to predict what kind of emotional reaction a particular piece of written content might prompt in a human reader.
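The label-construction step can be illustrated with a minimal sketch: given reaction counts for a page, pick the dominant emotion as the training label. The reaction categories come from the search engine’s five emotions, but the majority-vote rule and the counts are assumptions for illustration, not Emotions.Tech’s actual method.

```python
# Illustrative sketch of turning per-page social reaction counts into a
# training label for an emotion classifier. The majority-vote rule and
# the sample counts are hypothetical.

EMOTIONS = ("love", "laughter", "surprise", "anger", "sadness")

def emotion_label(reactions):
    """Pick the dominant emotion from a dict of reaction counts.

    Missing categories count as zero; ties resolve in EMOTIONS order.
    """
    return max(EMOTIONS, key=lambda e: reactions.get(e, 0))

page_reactions = {"love": 120, "laughter": 340, "anger": 15}
print(emotion_label(page_reactions))  # laughter
```

In a real pipeline the text of each page would be the model input and labels like this the training target; the sketch only shows how noisy reaction streams could be reduced to a single supervised label.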

Keeping up with the Tweets

Social media platforms like Facebook and Twitter produce an incredible volume of information every day. That provides Emotions.Tech with plenty of training data to ensure its neural networks remain accurate.

But it poses a huge challenge in terms of processing power. “The more data we feed our networks, the better they get – that’s the power of deep learning,” says Emotions.Tech’s Tero. “We based our system on NVIDIA GPUs because they allow us to process each page’s meta tags in less than a millisecond, about 50 times faster than with CPUs.”

Developing Artificial Emotional Intelligence

Tero believes that artificial emotional intelligence has the potential to revolutionise how we interact with technology. And how technology interacts with us.

For now, the company hopes that their solution will start to change how we interact with the internet by making it more emotionally transparent. “Our ultimate mission is to protect people from emotional abuse and hate speech while they’re online,” says Tero.

Take Mojeek and Emotions.Tech’s offering for a spin at https://www.mojeek.com/emotions.


NVIDIA GRID Fuels Cloud-Based Workstation Experience from Amazon Web Services

Amazon Web Services today released GPU-powered Amazon EC2 G3 instances, featuring graphics acceleration powered by the NVIDIA GRID virtualization platform and NVIDIA Tesla M60 GPU accelerators. This powerful new GPU-optimized instance allows architects, artists, product designers and other professionals to tackle the complex workloads that typically require high-end, desk-bound workstations.

With the Amazon EC2 G3 instance, users can work with photoreal models, complex visual effects and real-time 3D renderings — without being tethered to their desks. Instead, enterprises can allow their globally distributed teams to work from anywhere while collaborating in real time. When at their desks, today’s enterprise workers typically use multiple displays. Paired with NVIDIA GRID software, the Tesla M60 supports up to four 4K displays, compared to just one display on the G2 instance.

Amazon EC2 G3 instances are available across the United States and parts of Europe to help high-end workstation users access applications like Dassault SOLIDWORKS, Autodesk VRED or ESRI ArcGIS from wherever they may be, all with better performance than the previous generation.

Learn more about deploying professional graphics from the cloud.



AI Podcast: Hold the Mayo – ‘Not Hotdog’ Brings App from HBO’s Silicon Valley to Life

Move over, Pied Piper, there’s hotter tech in town.

If you didn’t get that reference from HBO’s Silicon Valley, tune in, because we’ve got the inside story on the latest gag on the show that everyone is buzzing about this year.

And it comes with a side of mustard.

On this week’s episode of the AI Podcast, developer Tim Anglade shares his experience developing “Not Hotdog,” an app that began as a joke on HBO’s Silicon Valley but is now available on the iOS and Android app stores.

Anglade, who also works as a consultant for the series, used NVIDIA GPUs, TensorFlow and Keras to train the app’s model. Users can now determine whether any food or object they photograph is or isn’t a hot dog.

“It’s a defining problem of our time,” quipped Anglade.

In the show, “Not Hotdog” is born when the character Jian-Yang tries to create a food-recognition AI app, only to have it backfire when it can identify only hot dogs.

To develop the app, Anglade relied on existing research and libraries covering image recognition.

“The concept of the app really is to use something that’s really, really smart to do something that’s really, really dumb,” said Anglade in a conversation with AI Podcast host Michael Copeland.

As a consultant, Anglade works to ensure that all aspects of the show — the engineering, the science, and the overall Silicon Valley culture — are accurately portrayed. Prior to consulting, Anglade was a “developer by trade.”

According to Anglade, working on Silicon Valley sometimes made him uncomfortable because of how realistic the show’s depictions are.

“I think most of what ends up being on the show is actually very close to reality,” said Anglade. “Too close for comfort.”

Just Keep Trucking

And if you missed last week’s episode, TuSimple CTO Xiaodi Hou explains how a driver shortage in Beijing inspired him to develop a driverless trucking platform.

How to Tune in to the AI Podcast

The AI Podcast is available through iTunes, DoggCatcher, Google Play Music, Overcast, PlayerFM, Podbay, Pocket Casts, PodCruncher, PodKicker, Stitcher and SoundCloud. If your favorite isn’t listed here, email us at aipodcast [at] nvidia [dot] com.


Featured image credit: C Watts, via Flickr
