MIT’s Interactive Dynamic Video

A GEARVR NEWS BULLETIN

by VRift720


[Image: MIT IDV vibration frequency map]

      If, after reading the title of this article, you were thinking this was something like Facebook’s Streaming Multi-Layer technology that (theoretically) lets you see 6K video, then just check your hat at the door now.  This one’s totally different, trust me.

      MIT’s Computer Science and Artificial Intelligence Laboratory has found an incredible way to manipulate objects in videos, bringing them to life without the help of computer graphics (CGI) trickery.  The researchers aim a traditional camera at a real-world object (such as a tree) and use software that detects the most minute vibrations in the object’s shape as it gets pushed and prodded.  Those vibrations are then distilled into an algorithm for that object.  As the database of scanned items grows, developers could attach the closest algorithm in that library to whatever object they wanted in a video, and the end user could then affect that object as if it had been modeled with the far more expensive and time-intensive CGI approach.  The simulation moves the object and cleans up the image so there are no gaps or distortions: if a tree branch is pulled down, you don’t see white or black pixels where the branch used to be.
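      To make that idea concrete, here is a minimal, hypothetical sketch of the concept (not MIT’s actual code): treat the object’s motion as a damped oscillation along a per-pixel displacement field estimated from the source video, then re-excite that motion with a virtual push and warp the single still image accordingly.  The function names, the made-up mode shape, and every parameter below are illustrative assumptions.

```python
import numpy as np

def damped_response(t, frequency_hz, damping, amplitude):
    """Displacement of a damped oscillation t seconds after a virtual 'push'.

    This stands in for the per-object 'algorithm' described above: once the
    object's dominant vibration frequency and damping are known, its response
    to a new poke can be synthesized without any 3D model.
    """
    omega = 2.0 * np.pi * frequency_hz
    return amplitude * np.exp(-damping * t) * np.sin(omega * t)

def warp_image(image, mode_shape, scale):
    """Backward-warp a still image along a per-pixel displacement field.

    image:      (H, W, 3) array, the single reference frame.
    mode_shape: (H, W, 2) array of displacement directions (dy, dx) per pixel,
                e.g. estimated from the tiny vibrations in the source video.
    scale:      scalar from damped_response(), how far along the motion we are.
    """
    h, w = image.shape[:2]
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sample each output pixel from its displaced source location; clipping at
    # the borders is a crude way to avoid "missing pixel" holes.
    src_y = np.clip(yy - scale * mode_shape[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xx - scale * mode_shape[..., 1], 0, w - 1).astype(int)
    return image[src_y, src_x]

# Toy usage: one still frame, one made-up mode shape, 60 synthesized frames.
still = np.random.rand(240, 320, 3)
mode = np.zeros((240, 320, 2))
mode[..., 1] = np.linspace(0.0, 1.0, 240)[:, None]   # the "object" sways more toward the bottom
frames = [warp_image(still, mode, damped_response(t / 60.0, 2.0, 1.5, 8.0))
          for t in range(60)]
```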

     Those “in the know” who have reviewed the research say this has the potential to change AR and VR dramatically in the coming years.  Right now, games require the CPU to model and animate complex systems, leaning heavily on textures in an attempt to approach reality … which is very hard to do.  Android phones still can’t pull it off like desktop PCs with their powerful GPUs.  But … what if they could?

      This new simulation layer would interact with objects after the base-level rendering was already finished.  The new rendering mode would go something like this: first, create a model that doesn’t need any bones or complex skeletal rigs.  Render the animation to the individual frames that would normally go straight to the HMD display.  But instead, add a new step, an “IDV (Interactive Dynamic Video) layer” spread out across the individual frames (using the object’s algorithm as driven by the user’s interaction), and then manipulate the imagery in post, over the frame-by-frame image output, to create the illusion of that object’s movement.
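      As a rough, hypothetical sketch of that ordering (none of these helpers come from MIT, and the whole-image shift is only a stand-in for a real per-pixel deformation), the per-frame loop might look like this:

```python
import numpy as np

def render_base_frame():
    """Stand-in for the conventional render path: no bones, no skeletal rig."""
    return np.random.rand(240, 320, 3)

def idv_layer(frame, state, dt=1.0 / 60.0):
    """Crude stand-in for the IDV post step: deform the finished 2D frame.

    A real deformation would follow the object's per-pixel vibration map; a
    whole-image horizontal shift keeps this sketch short.
    """
    state["t"] += dt
    offset = state["amp"] * np.exp(-1.5 * state["t"]) * np.sin(2.0 * np.pi * 2.0 * state["t"])
    return np.roll(frame, int(round(offset)), axis=1)

state = {"t": 0.0, "amp": 0.0}
for i in range(120):
    frame = render_base_frame()      # 1. the normal render finishes first
    if i == 0:                       # 2. pretend the user pokes the object once
        state["amp"] = 8.0
        state["t"] = 0.0
    frame = idv_layer(frame, state)  # 3. image-space deformation applied in post
    # 4. `frame` is what would then be sent to the HMD display
```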

      Let’s take a look at how it appears in practice.  As you can see, the plant in this image is merely that: a single image of a plant.  But it appears to get pulled and stretched, and when released, it jiggles appropriately for that plant.  All of that from a single image and a simulation running over it.

      This is because MIT developed a way to scan the plant using a video camera, creating a Vibration Frequency Map (see the image at the top of this page).  That map lets the application manipulate the plant in real time, creating an animation from a single image.  If a user in AR were, for example, to drop a Pokemon creature onto the branch of a tree they are seeing through their lenses, the tree branch would appear to sink down under its weight.

      The object would be changed only inside the AR display, while doing nothing, of course, to the real world.  If you took off the HMD you would see the branch still up and unaltered, but in your display it would look bent down, just as the Pokemon’s weight would bend it if it were really there.  If the Pokemon suddenly fell off the branch, under this simulation the tree branch would appear to snap back up with renewed buoyancy.
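      As a toy illustration of that branch behavior (a hypothetical damped spring, not anything taken from the MIT work), the sag-and-snap-back response could be simulated like this, with the resulting deflection feeding the image warp each frame:

```python
# Hypothetical branch response: a damped spring loaded by the virtual
# creature's weight.  While the weight is applied the branch settles into a
# sagged position; when the weight is removed it springs back and rings down.
stiffness, damping, mass = 40.0, 3.0, 1.0   # made-up branch parameters
weight = 9.8                                # downward force of the virtual Pokemon
dt, total_frames = 1.0 / 60.0, 240

deflection, velocity = 0.0, 0.0
trajectory = []
for i in range(total_frames):
    force = weight if i < total_frames // 2 else 0.0   # creature drops off halfway through
    accel = (force - stiffness * deflection - damping * velocity) / mass
    velocity += accel * dt
    deflection += velocity * dt          # this value would drive the per-frame image warp
    trajectory.append(deflection)
```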

      Since this is done on the image on a per-frame basis, even if you walked around the tree as the animation occurred, the perspective change wouldn’t necessarily disrupt the animation’s progress.  The simulation would simply pick up from the frame it had reached before you started moving and display the next iterative frame regardless of your motion (though rendered for your new viewpoint).  So the tree branch would continue to bounce through the 1.5 seconds of animation it had started, even if you walked partway around it during that time.  In other words, the view would update properly in spite of your movement.
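      A minimal sketch of that decoupling (again, purely illustrative): the vibration simulation advances on its own clock, while the camera pose only changes how the result is drawn.

```python
import numpy as np

def branch_deflection(t, freq=2.0, damping=1.5, amp=8.0):
    """Damped oscillation of the branch, t seconds after the creature lands or leaves."""
    return amp * np.exp(-damping * t) * np.sin(2.0 * np.pi * freq * t)

dt = 1.0 / 60.0
sim_t = 0.0
camera_yaw = 0.0                       # degrees; stands in for the viewer walking around
for i in range(90):                    # 1.5 seconds of branch animation
    sim_t += dt                        # simulation time always moves forward
    deflection = branch_deflection(sim_t)
    if 30 <= i < 60:                   # viewer walks around the tree mid-animation
        camera_yaw += 1.0              # a purely viewing-side change
    # Rendering would re-project the deflected branch for the current camera_yaw;
    # the deflection itself never depends on the viewer's motion.
```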

      AR and VR on mobile would benefit significantly from this technology, trading heavier computations for this lighter post-simulation process.  They say the gains would be enormous and would give mobile a lot more potential for surprising visuals and illusions than phones can currently produce.  It would be like taking a step into the future, and for VR fans, what could be better?

Well, even better than more words describing this is a video that explains it all:

      I’m thrilled to keep seeing all of these wonderful developments erupting into the VR space without warning.  Combined, and with their many uses explored by interested developers with keen imaginations, they could get us to that magic place we want to be in VR much sooner than we ever expected.  These are truly exciting times, friends…

