LG will be showcasing two large, multi-touch LCD displays at CES: one 52 inches and the other 84. Yes, these are large displays.
So how does the multi-touch work? IR-sensitive cameras (mounted, I believe, on the frame of the unit) detect one or two points of contact. You can interact with the display using fingers or writing/pointing instruments.
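To make the camera approach concrete, here's a minimal sketch of how an IR image might be turned into touch points: threshold the frame, flood-fill the bright connected blobs, and report the centroids of the largest one or two. The function name, threshold, and two-point cap are my illustrative assumptions, not LG's actual pipeline.

```python
from collections import deque

def detect_touches(frame, threshold=200, max_points=2):
    """Return up to `max_points` (x, y) centroids of bright blobs in `frame`.

    `frame` is a 2D list of IR intensities (0-255). Blobs are found by
    flood-filling 4-connected pixels at or above `threshold`; the largest
    blobs are reported first, mirroring a two-point contact limit.
    """
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                # Flood-fill this blob, collecting its pixel coordinates.
                queue, pixels = deque([(x, y)]), []
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    pixels.append((cx, cy))
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and not seen[ny][nx]
                                and frame[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                blobs.append(pixels)
    # Keep the largest blobs and report their centroids as touch points.
    blobs.sort(key=len, reverse=True)
    touches = []
    for pixels in blobs[:max_points]:
        cx = sum(p[0] for p in pixels) / len(pixels)
        cy = sum(p[1] for p in pixels) / len(pixels)
        touches.append((cx, cy))
    return touches
```

The two-blob limit is the interesting part: with frame-mounted cameras, a third finger can occlude another, which is one plausible reason a system like this stops at two contact points.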
Now it’s important to realize that LG’s two-point multi-touch is different from that implemented in Microsoft’s Surface computer (which uses rear-mounted cameras) and Dell’s forthcoming multi-touch, which uses a capacitive sensor. Like the iPhone, the LG unit supports only two points of contact; those other systems support more.
What I think is important about LG’s new touch-enabled displays is that they are one more signal that the touch era is upon us. Not only do we have touch Tablets, and iPhones, and Surface, and an increasing variety of touch displays, but there are other technologies yet to reach the market, such as those that embed imaging sensors directly within the display.
What does this mean? That there’s no better time than now to define a single model for touch gestures and actions. Unfortunately, the market is heading in all directions: the Tablet SDK, which supports a limited set of gestures such as Flicks; the new SDK from the Microsoft Surface group; and now, potentially, a new API from Dell for its multi-touch.
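To illustrate what a single model could buy us, here's a tiny sketch of a hardware-neutral gesture layer: one normalized event shape and one recognizer, the same whether the points come from IR cameras, rear-mounted cameras, or a capacitive sensor. Every name here is hypothetical; no existing SDK defines this API.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class TouchFrame:
    """Snapshot of contact points at one instant, in device-independent pixels."""
    points: list  # list of (x, y) tuples; at most two on hardware like LG's

def classify_two_point(start: TouchFrame, end: TouchFrame, tolerance=5.0):
    """Name the gesture implied by two-point motion between two snapshots.

    Compares the distance between the two contacts at start and end:
    growing apart is a "spread", closing together is a "pinch", and
    anything within `tolerance` pixels is treated as a "drag".
    """
    if len(start.points) != 2 or len(end.points) != 2:
        return "unsupported"
    d0 = hypot(start.points[0][0] - start.points[1][0],
               start.points[0][1] - start.points[1][1])
    d1 = hypot(end.points[0][0] - end.points[1][0],
               end.points[0][1] - end.points[1][1])
    if d1 - d0 > tolerance:
        return "spread"
    if d0 - d1 > tolerance:
        return "pinch"
    return "drag"
```

The point is that nothing in this layer cares which sensor produced the coordinates; each vendor would only need to emit `TouchFrame`s, and applications would get the same gesture names everywhere.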
The Tablet group would be the ideal one to lead this effort. So where are they? The silence is deafening. What a missed opportunity.