How it Works

1) A Polyopticon user starts with a default set of Poly-capsules: 32 distinct structures that house different applications, feeds, and capabilities. From this base set, a user builds his or her own library of objects and capabilities, annotating and encoding the objects as needed (a data-model sketch appears after this list).

2) The user has the capacity to replay timelines, which can then be edited. Playing back timeline objects in real time enables high-fidelity recall of the mental, physical, and emotional processes that went into the creation of a discernible, potentially significant artifact. Such artifacts can be insights, key calculations, or tallies of events.

3) The user builds agents using haptics. A user with a Polyopticon creates a timeline containing artifacts of application and feed interactions, and the timeline records the haptic motion that was applied to the timeline objects. The timeline objects portray past instances of work, enabling readier recall of facts and the replication and exploration of different perspectives, which leads to new thought patterns.

4) Generating haptic agents, or 3-D macros, is a key enabling technology that leads to greater productivity. Rather than needing to translate a series of activities into a script that mediates action, the Polyopticon approach allows researchers to use their own physical motion, and thus to embed their own conception of time into the process. The capability to store and share temporal perception, and literally to replay, annotate, and share one's own perception of working as a team, leads to making complex connections more quickly. The greater depth of metadata supports the design of agents that automate activities within the user's unique information-processing matrix (a timeline and 3-D macro sketch appears after this list).

5) The user engages with tasks in the foreground while the Polyopticon's agent matrix manages the rendering and feeding of background tasks. The foreground tasks are activities primarily focused on the creation of agents: bodies of data and capabilities (a scheduling sketch appears after this list).

6) A Poly-capsule can contain stored agents, or it can represent a set of capabilities (an application palette) that allows agents to be built, stored, and extended.

7) The device's haptics enable a much more personal relationship with it. The device captures and recalls the user's sense of time: the collected set of physical and cognitive cues and activities that contribute to the generation of insights.

8) The device incorporates a “topsight” model: a medium-resolution, forced-perspective view that allows the user to visualize all the individual capsules in the matrix. Topsight also allows an animated replay of activity, representing both the user's active interaction with foreground objects and the interplay among background objects (a topsight snapshot sketch appears after this list).

9) The underlying agent mechanism constantly scans the Poly-capsules for novel and nuanced data. The OS moves background objects to reflect new feeds, providing a live perspective on the user's data and information-processing matrix (a scan-loop sketch appears after this list).

10) The interface can be thought of as a set of “3-D browser tabs,” each containing the physical metadata that traces how its contents came to be in their current state. The physical interactions that led to the state of each “tab” contained in a Poly-capsule can then be explored, and a new set of timeline objects is created. This produces a network of connections in which temporal relationships can be explored using motion and correlated with the novel display algorithms defined to drive the device (a provenance sketch appears after this list).

11) The device serves as a media and peripheral coordinator, receiving input from, and feeding input to, external devices.

12) Applications and data feeds that reside virtually are processed by a specialized back-end algorithm that maintains coherence among the individual and collective timeline objects that are created (a reconciliation sketch appears after this list).
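
The sketches below are illustrative only. The description above does not specify data structures, APIs, or algorithms, so every class, field, and function name in these Python sketches is a hypothetical reading of the text, not the Polyopticon's actual implementation. The first sketch covers items 1 and 6: a Poly-capsule that either stores agents or represents an application palette, starting from a default library of 32 capsules that the user can annotate.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical model of a Poly-capsule (items 1 and 6): a capsule either
# stores agents or represents an application palette from which agents
# can be built, stored, and extended.

@dataclass
class Agent:
    name: str
    # Recorded haptic motion the agent replays (see the timeline sketch below).
    recorded_motion: List[str] = field(default_factory=list)

@dataclass
class ApplicationPalette:
    name: str
    capabilities: List[str] = field(default_factory=list)

@dataclass
class PolyCapsule:
    capsule_id: int
    label: str
    agents: List[Agent] = field(default_factory=list)
    palette: Optional[ApplicationPalette] = None
    annotations: Dict[str, str] = field(default_factory=dict)

    def annotate(self, key: str, note: str) -> None:
        """Attach a user annotation to this capsule."""
        self.annotations[key] = note

def default_library(size: int = 32) -> List[PolyCapsule]:
    """Build the default set of 32 distinct capsules a user starts with."""
    return [PolyCapsule(capsule_id=i, label=f"capsule-{i}") for i in range(size)]

if __name__ == "__main__":
    library = default_library()
    library[0].palette = ApplicationPalette("notes", ["annotate", "encode"])
    library[0].annotate("purpose", "daily reading feed")
    print(len(library), library[0].annotations)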
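
The next sketch covers items 2 through 4. It assumes, hypothetically, that haptic motion is captured as timestamped samples attached to timeline objects; replaying the timeline walks those samples in (scaled) real time, and a recorded gesture sequence can be compiled into a replayable “3-D macro.”

import time
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical timeline model (items 2-4): timeline objects carry the haptic
# motion that was applied to them, and a recorded motion sequence can be
# compiled into a replayable "3-D macro" agent.

@dataclass
class HapticSample:
    t: float                  # seconds from the start of the recording
    gesture: str              # e.g. "rotate", "pinch", "sweep"
    payload: tuple            # hypothetical 3-D parameters of the gesture

@dataclass
class TimelineObject:
    label: str                # e.g. "feed interaction", "key calculation"
    samples: List[HapticSample] = field(default_factory=list)

@dataclass
class Timeline:
    objects: List[TimelineObject] = field(default_factory=list)

    def replay(self, apply: Callable[[TimelineObject, HapticSample], None],
               speed: float = 1.0) -> None:
        """Replay recorded motion in (scaled) real time, object by object."""
        for obj in self.objects:
            start = time.monotonic()
            for s in obj.samples:
                # Wait until this sample's offset is reached at the chosen speed.
                delay = s.t / speed - (time.monotonic() - start)
                if delay > 0:
                    time.sleep(delay)
                apply(obj, s)

def compile_macro(timeline: Timeline) -> Callable[[TimelineObject], None]:
    """Turn the recorded gestures into a "3-D macro": a function that
    reapplies the same motion sequence to a new target object."""
    gestures = [s for obj in timeline.objects for s in obj.samples]

    def macro(target: TimelineObject) -> None:
        target.samples.extend(gestures)

    return macro

if __name__ == "__main__":
    tl = Timeline([TimelineObject("feed interaction",
                                  [HapticSample(0.0, "sweep", (1, 0, 0)),
                                   HapticSample(0.2, "pinch", (0.5,))])])
    tl.replay(lambda obj, s: print(f"{obj.label}: {s.gesture} {s.payload}"),
              speed=4.0)
    new_obj = TimelineObject("new artifact")
    compile_macro(tl)(new_obj)
    print(len(new_obj.samples), "gestures reapplied")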
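
A sketch of the foreground/background split in item 5, assuming, hypothetically, that the agent matrix simply interleaves short background steps between foreground interactions:

from collections import deque
from typing import Callable, Deque

# Hypothetical agent matrix (item 5): the user works on foreground tasks while
# background agents are given small time slices between interactions.

class AgentMatrix:
    def __init__(self) -> None:
        self.background: Deque[Callable[[], None]] = deque()

    def add_background(self, step: Callable[[], None]) -> None:
        self.background.append(step)

    def run_foreground(self, task: Callable[[], None], slices: int = 2) -> None:
        """Run one foreground task, then give a few background agents a turn."""
        task()
        for _ in range(min(slices, len(self.background))):
            step = self.background.popleft()
            step()
            self.background.append(step)  # round-robin: agents keep running

if __name__ == "__main__":
    matrix = AgentMatrix()
    matrix.add_background(lambda: print("  [bg] rendering feed"))
    matrix.add_background(lambda: print("  [bg] updating capsule"))
    matrix.run_foreground(lambda: print("[fg] building an agent"))
    matrix.run_foreground(lambda: print("[fg] annotating a timeline"))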
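
A sketch of the “topsight” view in item 8, reduced, hypothetically, to a coarse per-capsule summary that a medium-resolution, forced-perspective rendering could consume and animate:

from dataclasses import dataclass
from typing import Dict, List

# Hypothetical "topsight" summary (item 8): a medium-resolution view of every
# capsule in the matrix, suitable for a forced-perspective overview rendering.

@dataclass
class CapsuleActivity:
    label: str
    foreground_events: int = 0
    background_events: int = 0

def topsight_snapshot(capsules: List[CapsuleActivity]) -> List[Dict[str, object]]:
    """Collapse each capsule to a few numbers the overview can animate over time."""
    return [
        {
            "label": c.label,
            "activity": c.foreground_events + c.background_events,
            # Crude cue for the forced perspective: busier capsules render larger.
            "relative_size": 1 + (c.foreground_events + c.background_events) // 5,
        }
        for c in capsules
    ]

if __name__ == "__main__":
    matrix = [CapsuleActivity("reading feed", 3, 12),
              CapsuleActivity("calculations", 9, 1)]
    for row in topsight_snapshot(matrix):
        print(row)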
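
A sketch of the background scan in item 9, assuming, hypothetically, that novelty is detected by comparing each capsule's feed against the set of items already seen:

from typing import Dict, List, Set

# Hypothetical background scan (item 9): find items not seen before in each
# capsule's feed and surface them so the OS can move the matching background
# objects into view.

def scan_for_novelty(feeds: Dict[str, List[str]],
                     seen: Dict[str, Set[str]]) -> Dict[str, List[str]]:
    """Return the new items per capsule and record them as seen."""
    novel: Dict[str, List[str]] = {}
    for capsule, items in feeds.items():
        already = seen.setdefault(capsule, set())
        fresh = [item for item in items if item not in already]
        if fresh:
            novel[capsule] = fresh
            already.update(fresh)
    return novel

if __name__ == "__main__":
    seen: Dict[str, Set[str]] = {}
    print(scan_for_novelty({"news": ["a", "b"]}, seen))   # both items are new
    print(scan_for_novelty({"news": ["b", "c"]}, seen))   # only "c" is new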
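
A sketch of the “3-D browser tab” provenance idea in item 10, assuming, hypothetically, that each tab keeps an ordered trace of the physical interactions that produced its current state, and that exploring part of a trace spawns a new timeline object linked back to it:

from dataclasses import dataclass, field
from typing import List

# Hypothetical provenance model (item 10): each "3-D tab" records the physical
# interactions that led to its current state; exploring that trace creates new
# timeline objects linked back to the tab, forming a network of connections.

@dataclass
class Interaction:
    t: float          # when the interaction happened
    gesture: str      # physical motion applied
    result: str       # how the tab's state changed

@dataclass
class Tab:
    title: str
    trace: List[Interaction] = field(default_factory=list)
    linked_tabs: List["Tab"] = field(default_factory=list)

    def explore(self, since: float) -> "Tab":
        """Create a new tab from the part of the trace after `since`,
        linked back to this one so temporal relationships stay navigable."""
        child = Tab(title=f"{self.title} (explored)",
                    trace=[i for i in self.trace if i.t >= since])
        self.linked_tabs.append(child)
        return child

if __name__ == "__main__":
    tab = Tab("market feed", [Interaction(0.0, "sweep", "filtered"),
                              Interaction(5.0, "pinch", "zoomed to outliers")])
    child = tab.explore(since=5.0)
    print(child.title, [i.result for i in child.trace], len(tab.linked_tabs))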
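
Finally, a sketch of the coherence requirement in item 12. The text does not say how coherence is maintained; as one plausible stand-in, this uses simple last-writer-wins reconciliation over versioned timeline objects, which is an assumption rather than the Polyopticon's stated algorithm.

from dataclasses import dataclass
from typing import Dict

# Hypothetical back-end reconciliation (item 12): devices submit updated
# timeline objects; the back end keeps whichever version is newest so the
# individual and collective views stay coherent. (Last-writer-wins is an
# assumption, not the Polyopticon's stated algorithm.)

@dataclass
class TimelineRecord:
    object_id: str
    version: int
    payload: str

class CoherenceStore:
    def __init__(self) -> None:
        self.records: Dict[str, TimelineRecord] = {}

    def submit(self, record: TimelineRecord) -> TimelineRecord:
        """Accept an update only if it is newer than what is already stored."""
        current = self.records.get(record.object_id)
        if current is None or record.version > current.version:
            self.records[record.object_id] = record
        return self.records[record.object_id]

if __name__ == "__main__":
    store = CoherenceStore()
    store.submit(TimelineRecord("tl-1", 1, "draft"))
    kept = store.submit(TimelineRecord("tl-1", 3, "annotated"))
    stale = store.submit(TimelineRecord("tl-1", 2, "out-of-date edit"))
    print(kept.payload, stale.payload)  # both print "annotated"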