The Computer Gets a 3D Interactive World

Computer-generated "Milo" interacting with a human using Project Natal's sensors

Last night Flo gave me a link to the video below, from a trailer shown at “E3” (Electronic Entertainment Expo) this week, that has gamers asking whether they are seeing the next big thing. I don’t have a Wii or Xbox or any other game controller hanging off my entertainment center, so I will leave the hyperbolic questions of how this will affect the gaming world to the gamers.

However, if what we are being told in this demo is true – if the Project Natal sensor and its software really can pick up our motion and emotion, as well as understand what we are saying – then games are only the entry point for this technology.

So Milo appears to be an interactive personality – what it will be able to do beyond that demo, or how it will be marketed, is still a mystery to me, but it makes for a pretty impressive demo so far.

A more generic video shows a number of other game-context uses for the Project Natal sensor.

I couldn’t care less whether this works in games for driving cars or blowing stuff up. That may all be cool and fun, and I appreciate that gaming is essentially funding these innovations, but my interest is elsewhere. I also realize that this isn’t directly connected to Pais and how I interact in virtual worlds. However, it was through Pais that I became interested in these related aspects of using the computer as a way to interact with other people. That this is now a way to interact with the computer itself is only partly related to SL, but that fact, along with the irony in the title of this blog, seemed to make it relevant here.

Remember the book “Snow Crash” that Philip Linden/Rosedale said influenced part of his vision for Second Life? When characters in Snow Crash jacked into the metaverse, it was not only to interact with other humans-as-avatars, as we do in Second Life, but also with the computer – which was able to see and hear the 3D world of the humans as they interacted with the metaverse. One of the things from the book I have wanted to see in my reality is the virtual research assistant/librarian that Hiro Protagonist (the aptly named character in the book) worked with to research things like ancient Sumer, trying to figure out what was causing the snow crashes that were blanking the minds of programmers.

With the Snow Crash Librarian, we were able to imagine a computer we could interact with without a keyboard or mouse – by having a conversation, moving, and gesturing – one that could also connect what we were talking about with the available libraries of human knowledge and data. So now, dear readers, juxtapose what we see in Milo with another new thing in our world, Wolfram Alpha, and you start to see where I think this can take us. Add in what we should be seeing soon from the commercialization of DARPA’s PAL work that started around 2003, which we discussed in this video. Giving the computer 3D interactivity with us will likely change the way we interact with not only games, but our collective knowledge and information.

When Hiro interacted with the Librarian, it was in a virtual space, so as he did his research he saved parts of that work (photos, objects, text, data, video) arranged in his library room… kind of like having an infinite desk/notebook that you can walk around in, so that as you talk to the computer, that geography of information and discovery remains a tangible context.

I have been thinking a lot about using virtual worlds to communicate and collaborate with other people – a way to avoid the cost and time of traveling to work with people in other places. I think we still need to use avatars in one form or another, since I don’t think the holograms we have seen are quite ready. One of the open questions is how well we can communicate as avatars when so much of our face-to-face communication is nonverbal. If Project Natal really can pick up emotions and gestures, then it could better animate our avatars and vastly improve our human-communication bandwidth.

There are so many things I have been hoping and waiting for that seem more possible now than a week ago…

One more thing I will mention before I close… I remember another Microsoft project from years ago, even before their attempt at an assistant called “Clippy”… it was supposed to be a way for Windows to listen to and watch the user, and use that information, combined with what we were doing with the keyboard and mouse, to help it understand our situation. I had yet to see any of this – until now, perhaps. I do remember one of the things they said it might be able to do at the time: imagine we are using Word and it does one of those autoformats when we don’t want it to, so we curse at the screen – this assistant would sense our anger and realize it needed to do an “undo”, or better yet, apologize.
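Just for fun, here is a purely hypothetical sketch, in Python, of the kind of feedback loop such an assistant might run. Every name in it (the sensor, the editor, their methods) is invented for illustration – nothing here reflects any real Project Natal or Word API.

```python
# Hypothetical "affect-aware undo" loop. The sensor and editor are
# invented stand-ins for illustration only; no real API is implied.

class StubSensor:
    """Pretend camera/microphone that classifies the user's mood."""
    def read_emotion(self):
        return "angry"  # imagine the user just cursed at the screen

class StubEditor:
    """Pretend word processor that remembers its last automatic action."""
    def __init__(self):
        self.last_action = "autoformat"

    def undo(self):
        print("Undoing that autoformat...")

    def apologize(self):
        print("Sorry about that!")

def assistant_step(sensor, editor):
    # If the user became angry right after an automatic change,
    # assume the software caused it: back the change out and apologize.
    if sensor.read_emotion() == "angry" and editor.last_action == "autoformat":
        editor.undo()
        editor.apologize()

assistant_step(StubSensor(), StubEditor())
```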

4 June 2009 Update: Someone made a Wikipedia page for this. More info and links there.
