All About the Polyopticon, Take 1

A user looking at the Polyopticon™ sees their primary task (application, function) in the foreground. Related tasks occupy adjacent cells. Tasks, applications, functions, agents, and other objects of interest are rendered in relative positions in the background. As the user moves the device, the contents of the foreground cells (positioned relative to the user's visual field, which is tracked by the head-mounted visual field detector) shift in resolution and in their relationships to one another.
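
To make this concrete, here is a minimal sketch of how such a cell layout might be modeled, assuming a simple 2-D grid in which resolution falls off with distance from the center of the visual field. All names here (Cell, PolyopticonView, applyMovement) are illustrative assumptions, not part of any actual Polyopticon implementation.

```typescript
// Illustrative model: cells positioned relative to the user's visual field,
// with resolution derived from distance to the foreground center (0, 0).

interface Cell {
  id: string;           // task, application, function, agent, etc.
  x: number;            // position relative to the user's visual field
  y: number;
  resolution: number;   // 1.0 = full foreground detail, lower = background
}

class PolyopticonView {
  constructor(private cells: Cell[]) {}

  // Shift every cell opposite to the device movement and re-derive its
  // resolution from its distance to the visual-field center.
  applyMovement(dx: number, dy: number): void {
    for (const cell of this.cells) {
      cell.x -= dx;
      cell.y -= dy;
      const distance = Math.hypot(cell.x, cell.y);
      cell.resolution = 1 / (1 + distance); // nearer cells render in more detail
    }
  }

  // Cells currently detailed enough to count as foreground.
  foreground(): Cell[] {
    return this.cells.filter(c => c.resolution > 0.5);
  }
}
```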

Moving the device shifts foreground content into the "relative" background. The accumulation of these physical, task-level relationships (established by the user's input through moving the device) is the device's key distinction. The ability to create an effectively unlimited palette of gross- and fine-grained relationships places a spectrum of interactive capacities in the user's hands.
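
One way to picture this accumulation is as a weighted graph of task-to-task links, strengthened each time the user's movement brings one task into the foreground from another. The sketch below is a hypothetical illustration of that idea; the edge encoding and the strengthen rule are assumptions, not the device's actual mechanism.

```typescript
// Illustrative relationship graph: movement-driven transitions accumulate
// into weighted links between tasks.

type EdgeKey = string; // "from->to"

class RelationshipGraph {
  private weights = new Map<EdgeKey, number>();

  // Each time the user moves one task into the foreground from another,
  // strengthen the link between them.
  strengthen(from: string, to: string, amount = 1): void {
    const key: EdgeKey = `${from}->${to}`;
    this.weights.set(key, (this.weights.get(key) ?? 0) + amount);
  }

  // Gross-grained view: any link the user has ever made (minWeight = 0).
  // Fine-grained view: only links above a chosen weight threshold.
  related(task: string, minWeight = 0): string[] {
    const out: string[] = [];
    for (const [key, weight] of this.weights) {
      const [from, to] = key.split("->");
      if (from === task && weight > minWeight) out.push(to);
    }
    return out;
  }
}
```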

Relative speed and direction are captured, along with a much more detailed structure of the user's processing. Users can create their own associative physical vocabulary. The device can learn: it can capture the user's patterns, adapt, and respond. A physical repertoire is generated, and that repertoire can be task- and/or application-specific, or it can be globally applicable. Again, the device engages the user's entire sensory and motor system, generating much more definitive neural pathways as it is used, as well as capturing more "depth" of those pathways through the interface.
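
As a rough sketch of what capturing speed and direction into a repertoire could look like, the code below summarizes a recorded movement and binds it to a scope (task-specific, application-specific, or global). The scope field, the summary statistics, and every name here are illustrative assumptions rather than a description of the actual device.

```typescript
// Illustrative capture of a movement's speed and direction into a
// repertoire entry that can be task-, application-, or globally scoped.

interface MovementSample {
  dx: number;
  dy: number;
  timestampMs: number;
}

type Scope = { kind: "global" } | { kind: "task" | "application"; name: string };

interface RepertoireEntry {
  label: string;          // the user's name for the gesture
  scope: Scope;           // task-/application-specific or globally applicable
  meanSpeed: number;      // displacement units per second
  meanDirection: number;  // radians
}

// Summarize a recorded movement so that repeated gestures can later be
// matched against stored repertoire entries.
function summarise(samples: MovementSample[], label: string, scope: Scope): RepertoireEntry {
  const dx = samples.reduce((sum, m) => sum + m.dx, 0);
  const dy = samples.reduce((sum, m) => sum + m.dy, 0);
  const elapsedMs =
    samples[samples.length - 1].timestampMs - samples[0].timestampMs;
  return {
    label,
    scope,
    meanSpeed: Math.hypot(dx, dy) / Math.max(elapsedMs / 1000, 1e-3),
    meanDirection: Math.atan2(dy, dx),
  };
}
```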