How AI Helps Keep NASCAR Drivers Safe

When your race car is flying around the track at nearly 200 miles per hour, anything out of order — even a candy wrapper stuck to the grille — can pose a danger to both car and driver.

NASCAR racing teams want to know about everything going on with their cars, but their high speed makes them hard to see clearly. That’s why the teams have photographers snapping thousands of photos throughout each race.

And with 40 cars on the track at a time, quickly finding the car you care about in all of those images can be the difference between the checkered flag and a disastrous fire.

Neural Nets Fighting Fires at NASCAR

Bryan Goodman, an engineer with Argo AI / Ford Motor Company, spoke at the GPU Technology Conference last week about how his team applies deep learning originally developed for self-driving cars to the task of identifying specific race cars in photos.

Classifying NASCAR images.

Ford’s deep learning neural network was trained on thousands of images labeled by hand. Once trained, it began outperforming humans at identifying cars correctly, particularly when the number on the car was unclear or the image was blurry.

“We still have humans annotating data for training and testing, but we’re at a point where the networks are doing better than humans, even when the humans are highly trained,” said Goodman.
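Ford hasn’t shared its code, but the kind of supervised setup Goodman describes is often reproduced with transfer learning. Here’s a minimal sketch in PyTorch; the model choice, 40-class head and data paths are our own illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical layout: train/<car_number>/*.jpg, one folder per car.
train_set = datasets.ImageFolder("train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained network and replace the head
# with one output per car on the track (40 in a NASCAR field).
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 40)

optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```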

Picking the right race car isn’t as easy as one might think. NASCAR cars are repainted regularly, sometimes every week, according to Goodman, as sponsorships and other design elements on the car change. But two things stay the same: the car’s number and its manufacturer.

Goodman’s team suspected these were the cues the neural network prioritized to get such good results. To find out, the team visualized the network’s filter activations to determine which elements were most heavily weighted. As suspected, the number on the car and the vehicle’s manufacturer stood out.
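A common way to do this kind of inspection is to hook a network’s convolutional layers and see which filters respond most strongly to an image. A hedged sketch of the technique (the model and input here are stand-ins, not Ford’s network):

```python
import torch
from torchvision import models

# Stand-in for a trained car classifier.
model = models.resnet18(pretrained=True)

activations = {}

def save_activation(name):
    # Record each layer's output as the image flows through the network.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook every convolutional layer so we can see what fires.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(save_activation(name))

model.eval()
image = torch.rand(3, 224, 224)  # stand-in for a preprocessed car photo
with torch.no_grad():
    model(image.unsqueeze(0))

# Rank the final conv layer's filters by mean response; in Ford's case,
# the heavily weighted cues turned out to be the number and the make.
last = list(activations)[-1]
strength = activations[last].mean(dim=(0, 2, 3))
print(strength.topk(5))
```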

Argo AI inspecting the neural network.

“Sometimes I hear people describe machine learning and, in particular, deep neural networks as a black box,” said Goodman. “But that’s not accurate, because we can get a lot of information out of them.” (Read more about how NVIDIA is peering into our own autonomous driving neural net.)

The Road Ahead

In February, Argo AI and Ford announced they would join forces to strengthen the commercialization of self-driving vehicles. The Pittsburgh-based Argo AI team is now working alongside the autonomous driving team at Ford.

Just before the announcement, Ken Washington, vice president of Research and Advanced Engineering at Ford, wrote about the company’s plans for an autonomous future.

“Ford’s plan to develop self-driving vehicles isn’t just about freeing up time otherwise spent driving. It’s about making life better,” wrote Washington.

We agree, and we look forward to working with all of our partners to bring this future to life.

For more on NVIDIA’s full stack of automotive solutions, including the DRIVE PX 2 AI car supercomputer and the NVIDIA DriveWorks open platform for developers, visit NVIDIA.com/drive.

AI Podcast: Where Deep Learning Will Take Driving Next

Want to hear where deep learning is taking driving next? Check out episode 4 of the AI Podcast, featuring NVIDIA’s Danny Shapiro in conversation with podcast host Michael Copeland.

Feature image by Royalbroil, licensed via Wikimedia Commons.

AI Podcast: Using Deep Learning to Improve the Hands-Free, Voice Experience

“OK Google.” “Alexa…”

We’re familiar with these commands that wake up, say, our Amazon Echo or Google Home.

But what would the future of intelligent devices look like if we could bounce from using Amazon’s Alexa to order a new book to Google Assistant to schedule our next appointment, all in the course of a single conversation?

The latest episode of our AI podcast dives into this question as part of a conversation with Kitt.ai founder Xuchen Yao.

Previously, developers had to rely on a clap or a button press to trigger an intelligent device, which defeats the point of a “hands-free” experience. But Snowboy, a hotword-detection toolkit developed by Kitt.ai, solves this problem.
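Snowboy exposes a small Python API for exactly this. A minimal sketch, following the toolkit’s published demos and assuming you’ve grabbed a hotword model file from kitt.ai:

```python
import signal
import snowboydecoder  # from the kitt-ai/snowboy toolkit

interrupted = False

def signal_handler(sig, frame):
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, signal_handler)

# "snowboy.umdl" is the universal model shipped with the toolkit;
# sensitivity trades false wakes against missed ones.
detector = snowboydecoder.HotwordDetector("resources/snowboy.umdl",
                                          sensitivity=0.5)

# Blocks and listens on the microphone, firing the callback when the
# hotword is heard -- no clap or button press required.
detector.start(detected_callback=lambda: print("Hotword detected!"),
               interrupt_check=lambda: interrupted,
               sleep_time=0.03)
detector.terminate()
```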

“Before Snowboy, there was no, like, open community solution in the market,” Yao said, in a conversation with our podcast’s host, Michael Copeland.

Yao noted the possibility of combining the services offered by various companies through their intelligent devices.

“In the future, I would not be surprised to see a device, a 4-in-1 device, that has all the bigger companies’ backend waiting to serve you,” said Yao. “And then people can choose whoever’s services to use just by selecting this trigger word.”
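Snowboy can already listen for several hotwords at once, which suggests how a 4-in-1 device like the one Yao imagines might route each trigger word to a different service. A sketch along the lines of the toolkit’s multi-model demo; the model paths and backend handlers are illustrative placeholders:

```python
import snowboydecoder

# One model per trigger word; sensitivities are per-model.
models = ["resources/alexa.umdl", "resources/snowboy.umdl"]

def route_to_alexa():
    print("Waking the Alexa backend...")      # hypothetical handler

def route_to_assistant():
    print("Waking the Assistant backend...")  # hypothetical handler

detector = snowboydecoder.HotwordDetector(models, sensitivity=[0.5, 0.5])

# Callbacks are matched to models by position, so whichever trigger
# word is spoken selects the corresponding service.
detector.start(detected_callback=[route_to_alexa, route_to_assistant],
               sleep_time=0.03)
```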

When Yao first tried to bring a conversational engine to market in 2014 (Kitt.ai’s Chatflow), the closest thing that existed was Siri. But now, given the burgeoning development of AI technology, he’s optimistic that the next generation will experience more conversational and natural device interactions.

“I imagine in five to 10 years, we’re going to have the personal assistant everywhere we go,” he predicted.

AI Podcast: When AI Meets VR

And if one hot technology isn’t enough for you, how about two? On the previous edition of the AI podcast, we spoke with IBM’s Michael Ludden on how the pairing of AI and VR will impact various industries.

How to Tune in to the AI Podcast

We’re available through iTunes, Google Play Music, DoggCatcher, Overcast, Podbay, Pocket Casts, PodCruncher, PodKicker, Stitcher and Soundcloud. If your favorite isn’t listed here — or you have an idea for an upcoming podcast — send us a note at aipodcast [at] nvidia [dot] com.

GTC Doubleheader: AI, Robotics Events Draw 300 Teen Technologists

Hundreds of Ph.D.s prowled the hallways of the eighth annual GPU Technology Conference in San Jose last week to catch up on the latest developments in AI. Also in attendance were hundreds of teens, getting their first taste of what AI can offer.

Students work with an instructor during the Techsplorer challenge.

Some 200 students from half a dozen local middle and high schools joined workshops explaining AI and robotics technologies, coupled with think-on-your-feet team challenges. Experimenting with AI and robotics applications also helps reveal a possible path to careers in science, technology, engineering and math.

The day’s first event was part of the new Techsplorer initiative led by the NVIDIA Foundation, our employee-led charitable arm, in partnership with science education nonprofit Iridescent.

The aim: expose underserved youth to cutting-edge technologies like AI, as well as their myriad applications and potential to transform industries.

“We had real design challenges. We had to create a big picture on a graph to think things through, and then it was all about the little details of the design,” said Bhan Pragya, 14, an eighth-grader at Ocala Steam Academy in San Jose. “The structure was too wobbly and kept falling, so we had to keep experimenting to make things work.”

Spoken like a true engineer.

Techsplorer: Three AI Challenges

Led by 20 NVIDIA volunteers, the students learned about AI and its applications before tearing into hands-on activities using rubber bands, paper plates, straws, paper clips, marbles and aluminum foil.

Student teams work on a challenge during the Techsplorer event.

Pragya’s team focused on a challenge involving parallel processing during the half-day event. Other students tackled challenges focused on neural networks and self-driving cars.

Each team had to build a machine to process data using parallel channels, create a network that classifies information, or build a system of circuits to navigate around obstacles. While presenting their creations, students shared how they iterated their models, tweaked designs and worked through failures.

The students then toured the GTC exhibition hall, checking out demos of robots, drones and BB8, our AI car, before lining up to try out VR applications.

FIRST Robotics: Hands-On Labs

Later in the day, nearly 100 students from annual FIRST Robotics competition teams, several of them sponsored by NVIDIA, came to GTC to participate in a hands-on lab using the NVIDIA Jetson TX1 Developer Kit.

Students from FIRST Robotics teams take part in hands-on labs.

Piling into a room set up with training stations, each equipped with a Jetson and a giant monitor, students listened to talks on AI and deep learning before diving into the instructor-led workshop.

Sharing tips and tricks, NVIDIA instructors explained how to apply deep learning technology to devices. The buzz of lively debate and discussion from students filled the room as they tested their programming.

NVIDIA’s involvement in FIRST and the Techsplorer initiative is part of its commitment to supporting STEM programs in K-12 education.

What’s a Generative Adversarial Network? A Google Researcher Explains

If you haven’t yet heard of generative adversarial networks, don’t worry, you will.

The hottest topic in deep learning, GANs, as they’re called, have the potential to create systems that learn more with less help from humans.

Just ask Ian Goodfellow, who hatched the idea for GANs in 2014 when he was still a Ph.D. student at the University of Montreal. Now a research scientist at Google, Goodfellow explained the workings and whys of GANs to a rapt crowd at the GPU Technology Conference last week.

GANs remove one of the biggest obstacles to advancing AI, and particularly deep learning: the huge amount of human effort required.

Generative Adversarial Networks: “Most Interesting Idea in Last 10 Years”

AI pioneer Yann LeCun, who oversees AI research at Facebook, has called GANs “the most interesting idea in the last 10 years in machine learning.”

Typically, a neural network learns to recognize photos of cats, for instance, by analyzing tens of thousands of cat photos. But those photos can’t be used to train networks unless people carefully label what’s pictured in each image. That’s a time-consuming and costly task.

Cop vs. Counterfeiter: GANs Slash Data Needed for Deep Learning

GANs get around this problem by reducing the amount of data needed to train deep learning algorithms. And they provide a unique way to create labeled data – images, in most cases – from existing data.

Rather than train a single neural network to recognize pictures, researchers train two competing networks. Extending the cat example, a generator network tries to create pictures of fake cats that look like real cats. A discriminator network examines the cat pictures and tries to determine whether they’re real or fake.

“You can think of this being like a competition between counterfeiters and police,” Goodfellow said. “Counterfeiters want to make fake money and have it look real, and police want to look at any particular bill and determine if it’s fake.”
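In code, that counterfeiter-versus-police game is just two loss functions pulling against each other. A minimal PyTorch sketch of one training step; the tiny fully connected networks stand in for whatever image models a real system would use:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., 28x28 images, flattened

# Generator: noise in, fake "image" out (the counterfeiter).
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
# Discriminator: image in, probability it's real (the police).
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Police turn: learn to call real images real and fakes fake.
    fake = G(torch.randn(n, latent_dim))
    d_loss = bce(D(real_batch), ones) + bce(D(fake.detach()), zeros)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Counterfeiter turn: learn to make fakes the police call real.
    fake = G(torch.randn(n, latent_dim))
    g_loss = bce(D(fake), ones)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```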

When Fakes Get Real: Competing Neural Networks

The sparring networks learn from each other. As one works hard to find fake images, for example, the other gets better at creating fakes that are indistinguishable from the originals.

NVIDIA founder and CEO Jensen Huang, who described GANs as a “breakthrough” during his GTC keynote, compared the process to an art forger trying to pass off imitations of Picasso paintings as the real thing.

“After training, what you end up with is a network that is able to paint like Picasso, and you have another network that is able to recognize images and paintings at an unheard-of level of discrimination,” he said.

That’s important for fields like medicine, where privacy concerns limit the amount of available data. GANs can fill in the gaps, making it possible to produce entirely fabricated patient datasets that are just as useful for training AI as the real thing.

“You don’t want to put the patient through test after test,” Goodfellow said. “You want to be able to take results of a few tests and generate more.”

How a Horse Becomes a Zebra

GANs have an artistic side, too.

Want to draw but have no talent? Using a type of GAN created by researchers at the University of California, Berkeley, you make a rough sketch of what you want, choose colors and instantly turn your scribble into a drawing.

Jun-Yan Zhu, a Ph.D. candidate on that same Berkeley team, demonstrates how to use a GAN to turn a picture of a horse into a zebra, an orange into an apple, a van Gogh painting into a Cezanne, and more.

GANs also generate high-resolution images from low-resolution ones and convert aerial maps into photos, and they make it possible to do all sorts of photo manipulation.

“You can do things like change all kinds of properties in a face – the color of the lips or the arrangement of the hair – but still make sure it remains a realistic face with very sharp color,” Goodfellow said.

GAN Challenges Remaining

Generative adversarial networks require additional research to reach their potential, Goodfellow said. Sometimes the images they generate fall short of resembling reality. And GANs are still far from being able to generate complex data.

“We’re really good at making a GAN that can create one kind of image,” he said. “What’s really hard is to create a GAN that can draw dogs and cars and horses and all the images in the world.”

AI Podcast: AI Will Enhance VR, Founder of IBM’s VR/AR Labs Effort Says

Romeo and Juliet. Peanut butter and chocolate. AI and VR.

“They are star-crossed lovers, they’re destined for one another,” Director of Product for Watson Developer Labs Michael Ludden quipped.

During last week’s GPU Technology Conference, Ludden swung by our podcast booth to discuss how the pairing of AI and VR will impact various industries.

“AI is going to be used left and right in VR — it already is — and we’re just hoping to play a part in the underlying technology that makes it easier for developers to do that,” Ludden said in a conversation with Michael Copeland, the host of NVIDIA’s AI podcast.

Ludden is also the founder of the VR/AR Labs initiative, part of Watson Developer Labs at IBM. The group gives developers tools to build applications on the IBM platform that explore different uses for AI, VR and AR.

“I’d love to see people build and advance an emotional story with a virtual partner in a game that’s story based, or something with a productivity tool where maybe you’re using Autodesk or Tiltbrush,” Ludden said. “Really, the sky’s the limit.”

If you have your own idea on how VR and AI can be used together, don’t hesitate to reach out to Ludden.

“I’m still looking for the next killer use-case, and so if anybody who’s listening to this podcast is interested in sharing what they’re working on, in terms of AI and VR, I’d love to collect that and bring that into my talk going forward,” he said.

AI Podcast: Deep Learning Cooks Up the Perfect Meal

And if you missed our other podcast from GTC last week, and you like to eat, tune in: We spoke with Hristo Bojinov, the CTO of Innit, a startup that wants to use AI to deepen our relationship with our food.

How to Tune in to the AI Podcast

We’re available through iTunes, Google Play Music, DoggCatcher, Overcast, Podbay, Pocket Casts, PodCruncher, PodKicker, Stitcher and Soundcloud. If your favorite isn’t listed here — or you have an idea for an upcoming podcast — send us a note at aipodcast [at] nvidia [dot] com.

Guardian Angel: Why Your Next Car May Have an AI Co-Pilot

Self-driving cars are meant to keep us safe. But even if you drive yourself, AI could be looking out for you.

If configured to do so, AI, like a guardian angel, could even take over the car.

NVIDIA CEO Jensen Huang showed how that could work last week, during his keynote at our GPU Technology Conference, in Silicon Valley.

Our AI Co-Pilot technology uses sensor data from cameras and microphones inside and outside the car to track the environment around the driver.

When the AI notices a problem — perhaps that the driver is looking away from an approaching pedestrian — it could sound an alert.

In the demo, the vehicle’s AI system notices another car is about to run a red light. It then deactivates the throttle for the driver — potentially preventing a collision.
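NVIDIA hasn’t published Co-Pilot’s internals, but you can picture the decision layer in the demo as a simple rule over perception outputs. A purely illustrative sketch; the perception fields and thresholds are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    driver_gaze_on_road: bool       # from the in-cabin camera network
    pedestrian_time_to_path: float  # seconds, from exterior sensors
    cross_traffic_running_light: bool

def copilot_decision(p: Perception) -> str:
    # Hard hazard: another car is about to run a red light.
    # The demo's response was to deactivate the throttle.
    if p.cross_traffic_running_light:
        return "DEACTIVATE_THROTTLE"
    # Softer hazard: a pedestrian converging while the driver looks away.
    if not p.driver_gaze_on_road and p.pedestrian_time_to_path < 3.0:
        return "SOUND_ALERT"
    return "NO_ACTION"

print(copilot_decision(Perception(False, 2.1, False)))  # -> SOUND_ALERT
```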

Cloud-to-Car HD Mapping

Another way AI can help keep people out of harm’s way is by helping drivers and cars anticipate what’s ahead through the creation of high-definition maps.

Huang also showed how NVIDIA uses deep learning and the NVIDIA DriveWorks SDK to create HD maps and keep them updated. Here’s how it works.

A car first drives and scans the world with cameras, radar and lidar. Mapping systems then apply deep learning to the collected scan data, detecting road features and enabling the creation of an HD map.

The finished map is sent back to the car, which uses it to determine the vehicle’s precise location. Object detection — through DriveWorks deep neural networks — takes place onboard the NVIDIA DRIVE PX 2 AI supercomputer in the car, identifying pedestrians, cars, bicycles and other objects.
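Put together, the cloud-to-car loop reads like a two-part pipeline. A schematic sketch in Python; every function here is a simplified stand-in, not the DriveWorks API:

```python
def detect_road_features(sensor_logs):
    """Cloud side: deep learning would extract lanes, signs, etc."""
    return [feat for log in sensor_logs for feat in log["features"]]

def merge_into_map(hd_map, features):
    """Fuse detections from many drives into the map, keyed by feature id."""
    updated = dict(hd_map)
    for feat in features:
        updated[feat["id"]] = feat
    return updated

def localize(observed_ids, hd_map):
    """Onboard: match live detections against map features to fix position."""
    matches = [hd_map[i]["position"] for i in observed_ids if i in hd_map]
    if not matches:
        return None
    # Crude pose estimate: average of matched feature positions.
    return tuple(sum(axis) / len(matches) for axis in zip(*matches))

# Cloud pass: build/update the map from a scanning drive's logs.
logs = [{"features": [{"id": "lane_7", "position": (10.0, 2.0)},
                      {"id": "sign_3", "position": (12.0, 5.0)}]}]
hd_map = merge_into_map({}, detect_road_features(logs))

# Onboard pass: the car re-observes features and localizes against the map.
print(localize(["lane_7", "sign_3"], hd_map))  # -> (11.0, 3.5)
```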

Whether you’re driving your car, or it’s driving you, AI is making the road safer for everyone.

AI Podcast: Where Deep Learning Will Take Driving Next

Want to hear where deep learning is taking driving next? Check out AI Podcast episode No. 4, featuring Danny Shapiro in conversation with podcast host Michael Copeland.
