Come one, come all! Starbreeze and OVERKILL invite you to an exclusive live-streamed event on Twitch.tv May 10, starting 9am PST / 12pm EST / 6pm CEST, featuring first looks and previews, in-depth interviews, and fun for the whole community across a number of Starbreeze projects, including OVERKILL’s The Walking Dead, PAYDAY 2®, an introduction to the universe of System Shock 3®, Psychonauts 2, Dead by Daylight, and a panel titled “Veterans of the Industry” featuring Warren Spector, Tim Schafer and Bo Andersson.
With a vision of VR serving as the future of computer interfaces, NVIDIA has set its sights on refining the rendering process – to increase throughput, reduce latency, and create a more mind-blowing visual experience for users.
That effort was the subject of a presentation Tuesday at the GPU Technology Conference, where Morgan McGuire, an associate professor of computer science at Williams College who will soon join NVIDIA as a distinguished research scientist, told attendees that there are significant challenges to overcome.
For instance, McGuire said future graphics systems need to be able to process 100,000 megapixels per second, up from the 450 megapixels per second they’re capable of today. Doing so will help push rendering latency down from current thresholds of about 150 milliseconds to 20 milliseconds, with a goal of eventually getting it under one millisecond, approaching the limits of human perception.
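As a back-of-the-envelope check on those figures (the numbers below are the ones quoted in the talk; the script itself is just illustrative arithmetic, not anything NVIDIA presented):

```python
import math

# Figures quoted in McGuire's talk.
current_throughput_mpix = 450        # megapixels per second today
target_throughput_mpix = 100_000     # megapixels per second needed

current_latency_ms = 150             # typical end-to-end latency today
target_latency_ms = 1                # goal: under one millisecond

# Improvement factors implied by the stated goals.
throughput_gain = target_throughput_mpix / current_throughput_mpix  # ~222x
latency_gain = current_latency_ms / target_latency_ms               # 150x

# Combined, the required improvement spans several orders of magnitude,
# in the same ballpark as the "five or six" McGuire cites.
combined_orders = math.log10(throughput_gain * latency_gain)
print(f"throughput gain: {throughput_gain:.0f}x")
print(f"latency gain:    {latency_gain:.0f}x")
print(f"combined:        ~{combined_orders:.1f} orders of magnitude")
```

Multiplying the two gains together lands around 4.5 orders of magnitude, which is roughly consistent with the scale of the gap McGuire describes.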
“We’re about five or six orders of magnitude [between] modern VR systems and what we want,” McGuire said during a well-received talk. “That’s not an incremental increase.”
What makes latency an even more pressing problem is that as VR systems increase resolution, their throughput demands rise as well, which adds latency. So even as per-stage latency shrinks, the gains are often offset by the growth in throughput.
“You can’t process the first pixel of the next stage until you’ve completed the final pixel in the previous stage,” McGuire said.
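The pipelining constraint McGuire describes can be sketched with a toy model: a frame's latency is the sum of every stage it passes through serially, while throughput is limited only by the slowest stage. The stage names and timings below are illustrative assumptions, not figures from the talk:

```python
# Hypothetical per-frame stage times in milliseconds (illustrative only).
stages_ms = {
    "simulation": 10,
    "render": 30,
    "post_fx": 15,
    "display_scanout": 11,
}

# A single frame's latency is the sum of the stages it traverses serially:
# the next stage can't start on a pixel until the previous stage finishes it.
frame_latency_ms = sum(stages_ms.values())   # 66 ms end to end

# With pipelining, however, a new frame enters as soon as the slowest
# stage frees up, so throughput depends on that stage alone.
slowest_ms = max(stages_ms.values())         # 30 ms
frames_per_second = 1000 / slowest_ms        # ~33 fps

print(frame_latency_ms, round(frames_per_second, 1))
```

This is why a system can sustain a high frame rate and still feel laggy: raising throughput (shrinking the slowest stage) does not by itself shrink the end-to-end sum that a user perceives as latency.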
To bring latency down enough, McGuire said NVIDIA is, and will be, experimenting in many areas:
It starts with the renderer, which drives most of the latency. McGuire said that removing path tracing, the film industry’s primary rendering technique, from the process and replacing it with GPU-based rasterization speeds up rendering.
NVIDIA research teams have found that eliminating post-rasterization tools such as shading and post FX increases throughput and reduces latency, but it also reduces image quality.
Eye-tracking software enables VR systems to deliver the sharpest resolution to whichever part of an image the user is looking at, allowing the rendering process to deliver lower-resolution imagery for the rest of the display.
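This gaze-contingent approach can be sketched as a pixel-budget calculation: full density only inside the foveal region, a fraction elsewhere. The display size, foveal coverage, and peripheral density below are assumptions chosen for illustration, not numbers from the talk:

```python
# Assumed 4K-per-eye display (illustrative, not a figure from the talk).
width, height = 3840, 2160
total_pixels = width * height

# Suppose the gaze region covers 5% of the display at full resolution,
# and the periphery is shaded at 1/16 the pixel density.
foveal_fraction = 0.05
peripheral_density = 1 / 16

shaded_pixels = total_pixels * (
    foveal_fraction + (1 - foveal_fraction) * peripheral_density
)
savings = 1 - shaded_pixels / total_pixels

print(f"shaded: {shaded_pixels / 1e6:.1f} of {total_pixels / 1e6:.1f} Mpix")
print(f"savings: {savings:.0%}")
```

Under these assumed parameters, roughly 89% of the shading work disappears, which is the kind of headroom that makes the throughput targets above look less impossible.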
Breaking an image into many versions and angles of that image — like creating a bug’s view of an image — also brings down latency, but it requires a lot of throughput, just as if it were processing many images simultaneously.
NVIDIA researchers also have been testing the effectiveness of using a sheet of holographic glass that replaces the assortment of lenses and filters that a camera uses, enabling focus-on-the-fly by moving back and forth as the user’s eyes move to different parts of an image.
Will VR Kill the Keyboard?
Probably the most surprising part of McGuire’s talk was the subject of text. When he brought this up, audience members were momentarily confused, until he explained further that if VR becomes the gateway to augmented reality and the interface for consumer computer use, it will one day replace keyboards with some other tool. And that means that how text is entered and displayed becomes a major consideration.
In this scenario, McGuire said, “text is actually the killer app” for VR — definitely not what anyone in the room expected to hear.
Naturally, all of these improvements are likely to drive up the price tag for desktop VR systems, which have typically cost about $5,000. McGuire declined to speculate on what pricing of future systems might look like, but he made it clear the increase won’t be as dramatic as some might fear.
Blizzard Entertainment, Inc. has formed a dedicated division within the company that will handle management, operation, sales, and distribution for Overwatch® esports programs, including the Overwatch League™ and Overwatch World Cup. In addition to bringing together some of the most talented people from across the entire organization, this effort includes the full integration of Major League Gaming, acquired in 2015, into Blizzard and also leverages some of the best and brightest from traditional sports, esports, and entertainment.
The new division, which will retain the MLG name, will build on Blizzard’s nearly 20-year history as a leader in esports and leverage MLG’s extensive experience with live events and content distribution. It will also operate the MLG-branded media network, which will broadcast both Blizzard and Activision esports content as well as other premium gaming programming.
In addition, the associated teams and technologies will serve as the operational foundation, partnership hub, and media-production network for the Overwatch League as well as Activision’s Call of Duty® World League. This includes league management; team, media, advertising, and sponsorship sales; content development; and event production.
Blizzard’s separate esports team will continue to directly manage and operate the global esports programs for Heroes of the Storm®, Hearthstone®, StarCraft®, and World of Warcraft®, but will begin leveraging the new division’s capabilities for media production, sales, and distribution.
With these initiatives underway, Blizzard currently has multiple open esports positions, with more on the way. Apply now at jobs.blizzard.com, and stay tuned there for more esports openings in the weeks ahead.