RealSense app challenge
Game on!
A friend of mine pointed me in the direction of the Intel® RealSense™ App Challenge 2014, with the comment "This looks like stuff you're into, doesn't it?". Indeed it did, or, rather, indeed it did look like stuff I'd like to be into.
GAME OVER
This is what I didn't submit to the competition:
[Image: Creative Senz3D]
- Take one of those Intel® RealSense™ cameras.
- Add a 3D screen (a 3D TV, an Oculus Rift or a Google Cardboard are just a few possibilities).
- Combine those with a pair of force feedback gloves,
and you have a system that would let you manipulate virtual objects by hand and touch.
I've wanted one of those as a modeling tool ever since I began playing with raytracers. This setup would, however, fit each and every one of the App Challenge's categories, not just 3D modeling:
- Gaming + Play (As a combined controlling and visualising system)
- Learning / Edutainment (The ability to physically alter virtual objects can help immensely in understanding how they work in reality)
- Interact Naturally (Well, it is, after all, pretty natural to manipulate things with your hands)
- Collaboration / Creation (Combine two of these systems, put them in different places, and share a pseudophysical workbench at each location. Collaborate and create away!)
- Open Innovation ("Move the sofa to the left, will you?" "Ok! *push*")
Physics
Of the things in the equipment list, I think the gloves are the tricky part. There aren't many half-decent, affordable gloves on the market. CyberGlove Systems' CyberGrasp looks really nice, but I suspect it is a bit on the pricey side. The same company's CyberTouch (and its successor, the CyberTouch II) uses fingertip vibrators to signal contact. I'd like a glove with at least fingertip feedback, but with inflatable cushions or something similar (electroactive polymers could possibly press against the fingertip with greater accuracy and speed), so that you can touch things lightly or firmly, rather than just knowing that you're touching them.
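To make the "lightly or firmly" part concrete, here's a minimal sketch of the control idea: map how deep a fingertip has pushed into a virtual object to how hard the cushion presses back. Everything here is hypothetical; no real glove exposes this API.

```python
# Hypothetical sketch -- none of these names come from a real glove API.
# Map how deep a fingertip sits inside a virtual object to a target
# pressure for an inflatable fingertip cushion.

MAX_PRESSURE = 1.0    # cushion fully inflated (normalized)
FIRM_DEPTH = 0.01     # penetration depth (metres) at which pressure maxes out

def fingertip_pressure(penetration_depth: float) -> float:
    """Normalized cushion pressure: 0.0 is no contact, 1.0 is a firm press."""
    if penetration_depth <= 0.0:
        return 0.0  # fingertip is outside the object: no feedback
    # Linear ramp from a feather-light touch up to a firm press.
    return min(penetration_depth / FIRM_DEPTH, MAX_PRESSURE)

# A light touch, 2 mm into the surface:
print(fingertip_pressure(0.002))  # -> 0.2
```

A linear ramp is the simplest possible mapping; a real controller would want a tuned curve per finger, and actuators fast enough to keep up with the hand.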
Unless you have a whole force feedback exoskeleton arm, the experience would of course be very limited. It'd still be better than nothing.
Vision
Depending on the type of display device chosen, entirely different sets of limitations would be imposed on the system. If the display device is a 3D TV, the user would be tethered rather tightly to it, as it has to remain within the user's visual field, and all objects available for physical manipulation would have to be placed in the space between the user and the TV.
A head mounted display could enable a larger visual field to be used, and make it possible to walk short distances to change the perspective, but the user would still be locked into the area seen by the Intel® RealSense™ thingamabob. Given the somewhat small range of 0.2 to 1.2 meters from the camera, this could be a big problem. If a greater range is needed, the Kinect could be a better choice, with its range of 0.8 to 4 meters, though probably at the cost of some resolution.
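Those numbers translate into a hard constraint on where the interaction volume can be. A toy sketch of the trade-off (the ranges are the figures quoted above; the rest is made up for illustration):

```python
REALSENSE_RANGE = (0.2, 1.2)  # metres, close-range camera
KINECT_RANGE = (0.8, 4.0)     # metres, room-scale alternative

def in_range(distance_m: float, device_range: tuple[float, float]) -> bool:
    """True if a point at distance_m falls inside the device's depth range."""
    near, far = device_range
    return near <= distance_m <= far

# A user standing 1.5 m from the sensor is lost by the close-range
# camera, but still tracked by the Kinect:
print(in_range(1.5, REALSENSE_RANGE))  # False
print(in_range(1.5, KINECT_RANGE))     # True
```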
Positioning
The contest rules mention that participants must show a good understanding of the device's API. I must confess that I haven't taken the time to read the specs, but I feel confident in guessing a few things:
There's a 3D sensing device that is able to identify eyes and mouth, and thereby probably able to determine the user's head tilt. Possibly swivel and pan, too. In the case of a head mounted display, swivel, pan and tilt could be determined using both devices in cooperation. The camera is able to determine, with a reasonable degree of accuracy, the user's X, Y and Z position in space, something that could be tricky to get from a head mounted display, and next to impossible if using a simple 3D TV and glasses.
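For what it's worth, the head-tilt guess is easy to back up with a bit of trigonometry: if a face tracker reports the image positions of the two eyes, the roll angle falls straight out of atan2. The landmark format below is an assumption for illustration, not the actual RealSense SDK API.

```python
import math

# Assumed landmark format: (x, y) pixel coordinates from some face
# tracker, with y growing downwards.

def head_roll_degrees(left_eye: tuple[float, float],
                      right_eye: tuple[float, float]) -> float:
    """Roll (tilt) of the head in degrees; 0 when the eyes are level."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Eyes level -> no tilt; right eye 20 px lower -> head tilted ~11 degrees:
print(head_roll_degrees((200, 300), (300, 300)))  # 0.0
print(head_roll_degrees((200, 300), (300, 320)))  # ~11.3
```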
One big if is whether the 3D camera is any good at tracking such a head mounted display, or whether you'd have to put a face mask on it…
Conclusion
There are several reasons I didn't submit any of this to the competition. One is that I wouldn't have the time or money to actually build the system even if I got the 3D hardware in my hands (though yes, I'd like to experiment with it). Another is the feeling that I've expanded the scope way beyond what's expected, while at the same time reducing the RealSense™ part of it all to a much smaller component than Intel® would like it to be. What's their definition of "an app"?
Update
2015-01-23: Yes, this looks really nice: Microsoft's HoloLens. Gimme please?