A new discipline for a new dimension
The Polyopticon interaction model unifies spatial navigation, temporal capture, and haptic input into a coherent language of work. We call this discipline temporal-haptics—the unique combination of physical manipulation and temporal storage that frees the physical and sensory systems to engage information in fundamentally new ways.
Temporal-haptics: The discipline
Temporal-haptics names the emergence of a broader discipline: designing systems that intrinsically augment user experience by engaging the entire sensory and motor system. It implies that every object the device interacts with is likewise enabled to operate in three physical dimensions plus time.
What it captures
Relative speed and direction are captured. A much more detailed structure of the user’s processing emerges. Users can create their own associative physical vocabulary. The device can learn—it captures patterns, adapts, and responds.
What it enables
A physical repertoire is generated that can be task-specific, application-specific, or globally applicable. The device engages the user’s entire sensory and motor system, generating more definitive neural pathways as it’s used.
The method flow
A diagrammable process for working with information. Learnable, composable, and—critically—transmissible between people.
Configure → Navigate → Capture → Replay → Refine → Encode → Share
The method is iterative: encoded agents become building blocks for new configurations. Shared timeprints seed new workflows. The system compounds expertise over time.
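As a minimal sketch, the flow can be modeled as a typed cycle. The phase names come from the diagram above; the type and function names below are illustrative assumptions, not a published API.

```typescript
// Illustrative sketch of the method flow as a typed cycle.
// Phase names follow the diagram above; everything else is assumed for the example.
type Phase =
  | "configure" | "navigate" | "capture"
  | "replay" | "refine" | "encode" | "share";

const FLOW: Phase[] = [
  "configure", "navigate", "capture", "replay", "refine", "encode", "share",
];

// Returns the phase that follows `current`, wrapping back to "configure"
// to reflect the iterative nature of the method.
function nextPhase(current: Phase): Phase {
  const next = (FLOW.indexOf(current) + 1) % FLOW.length;
  return FLOW[next];
}

nextPhase("share"); // "configure": sharing seeds the next configuration
```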
Theoretical grounding: Enactive heuristics
The interaction model rests on principles from enactive cognition—the insight that understanding emerges through action, not passive reception.
Sensorimotor coupling
Cognition is not separate from perception and action—it emerges from their coordination. Physical manipulation of the device directly affects information state. The body’s spatial intuitions transfer to digital navigation.
Situated meaning
Information takes meaning from context—its position in space, its relationship to other objects, its history of manipulation. Location matters. Relationship matters. Context is structure, not metadata.
Enacted knowledge
Expertise becomes encoded in gesture patterns and spatial arrangements. Skill lives in the body-device coupling, not just in declarative memory. What you know is inseparable from how you act.
Deep dive: From representation to structure
The primary context for information technology has been to represent. The design language for software and display is essentially representational—computing systems evolved to allow humans to create representational artifacts.
Humans operate in the physical domain, the domain of here and now, focusing and functioning in the present. This is a structural domain with its own physical language, fundamental and distinct from representational language. Yet this structural domain has so far resisted the development of a design language.
What we propose is a way to capture the structural language of the user in the device. This doesn’t merely provide capture—it provides, intrinsically and by design, access to the structural dimension. Having the capability to interact in the present and capture the structural language that occurs around processing, generating, expressing, and storing provides a new ground on which representational activities can be developed.
Spatial primitives: The topology of attention
A user looking at the Polyopticon sees their primary task in the foreground. Related tasks occupy adjacent cells. Tasks, applications, functions, agents, and other objects of interest are rendered in relative positions in the background. As the user moves the device, contents shift in resolution and relative relationship.
Moving the device means shifting foreground to relative background. This accumulation of physically-related, task-related relationships is the key distinction. The ability to create an infinite palette of gross- and fine-grained relationships places a spectrum of interactive capacities in the user’s hands.
Geometric elements
Faces
Primary display and interaction surfaces. Each face hosts a poly-capsule—a workspace object containing streams, tools, or contexts. The currently-oriented face is the foreground.
Edges
Transition zones between faces. Rolling across an edge shifts context gradually, allowing preview before commitment. Edge gestures trigger cross-capsule operations.
Vertices
Convergence points where multiple faces meet. Vertex positions mark decision points and enable multi-capsule coordination. Resting on a vertex suspends context switching.
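A rough data model makes the topology concrete. The sketch below uses hypothetical type names (Face, Edge, Vertex, Topology) to illustrate the relationships described above, not an actual Polyopticon schema.

```typescript
// Illustrative data model for the polyhedral topology described above.
// All type and field names are assumptions, not a published Polyopticon API.
interface Face {
  id: string;
  capsuleId: string | null;   // the poly-capsule hosted on this face, if any
}

interface Edge {
  id: string;
  faces: [string, string];    // the two faces it joins; crossing it previews a context shift
}

interface Vertex {
  id: string;
  faces: string[];            // the converging faces; resting here suspends context switching
}

interface Topology {
  faces: Face[];
  edges: Edge[];
  vertices: Vertex[];
  foregroundFaceId: string;   // the currently-oriented face
}
```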
Poly-capsules: Workspace objects
A poly-capsule is not a window. It’s a manipulable object that encapsulates a coherent unit of work or attention:
Stream
A data feed, document, or media source—anything that flows or updates.
Tool
An application, function, or capability that acts on other capsules.
Context
A saved state, filtered view, or configured environment.
Agent
An encoded behavior pattern ready for execution or monitoring.
Capsules maintain spatial relationships as you navigate. Moving one capsule to a new face doesn’t break its connections to others—the topology persists. The device enables users to draw in application interfaces and objects and work with them appropriately, maintaining coherence across the workspace.
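The four capsule kinds lend themselves to a discriminated union, with links tracked separately so the topology persists across moves. The field names below are assumptions for illustration only.

```typescript
// Sketch of the four capsule kinds as a discriminated union; field names are hypothetical.
type PolyCapsule =
  | { kind: "stream"; id: string; source: string }            // data feed, document, or media source
  | { kind: "tool"; id: string; actsOn: string[] }            // capability that acts on other capsules
  | { kind: "context"; id: string; savedState: unknown }      // saved state or configured environment
  | { kind: "agent"; id: string; timeprintId: string };       // encoded behavior ready for execution

// Spatial relationships persist as capsules move between faces.
interface CapsuleLink {
  from: string;        // capsule id
  to: string;          // capsule id
  relation: string;    // e.g. "feeds", "filters", "monitors"
}
```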
Haptic vocabulary: Touch as language
Touch isn’t just pointing. Pressure, duration, location, and combination encode distinct meanings—a physical vocabulary that becomes fluent with use. Users develop their own associative physical vocabulary; a whole space of interactive possibilities remains to be created.
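One way to picture the vocabulary is as a descriptor built from the four dimensions named above. The HapticToken shape below is a hypothetical sketch, not a specification.

```typescript
// Hypothetical descriptor for a single haptic "word", built from the four
// dimensions named above. Nothing here is device-specific.
interface HapticToken {
  pressure: "light" | "firm" | "deep";
  durationMs: number;
  location: { faceId: string; x: number; y: number };
  contacts: number;     // combination: how many simultaneous contact points
}

// A user-defined vocabulary maps a meaning to the token sequence that expresses it.
type HapticVocabulary = Map<string, HapticToken[]>;
```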
Motion semantics: The body in the loop
How you move the device carries meaning. Rotation navigates space. Tilting adjusts parameters. The physical gesture maps to digital effect. The device captures more “depth” of the user’s pathways through the interface.
Rotation (roll)
Primary navigation. Rolling the device to a new face brings that capsule into foreground. The transition can be smooth (gradual roll) or discrete (quick flip).
Rotation (pitch/yaw)
Parameter adjustment within a face. Tilting forward/back or left/right modulates continuous values—volume, zoom level, timeline position.
Shake
Reset or undo. A quick shake returns to default state or reverses the most recent action. Intensity maps to scope of reset.
Rest
Setting down the device. Signals pause, saves state, or triggers docked mode if a base station is present.
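The motion-to-effect mapping above can be summarized as a small interpreter. Every event and effect name in the sketch is an assumption chosen to mirror the list, not an actual device API.

```typescript
// Minimal interpreter for the motion-to-effect mapping described above.
type Motion =
  | { kind: "roll"; toFaceId: string; speed: "gradual" | "quick" }
  | { kind: "tilt"; axis: "pitch" | "yaw"; delta: number }
  | { kind: "shake"; intensity: number }
  | { kind: "rest" };

type Effect =
  | { kind: "navigate"; faceId: string; transition: "smooth" | "discrete" }
  | { kind: "adjustParameter"; axis: "pitch" | "yaw"; delta: number }
  | { kind: "undo"; scope: number }
  | { kind: "pause" };

function interpret(motion: Motion): Effect {
  switch (motion.kind) {
    case "roll":   // primary navigation: bring a new face into foreground
      return {
        kind: "navigate",
        faceId: motion.toFaceId,
        transition: motion.speed === "quick" ? "discrete" : "smooth",
      };
    case "tilt":   // parameter adjustment within a face
      return { kind: "adjustParameter", axis: motion.axis, delta: motion.delta };
    case "shake":  // reset or undo; intensity maps to scope
      return { kind: "undo", scope: motion.intensity };
    case "rest":   // setting the device down pauses and saves state
      return { kind: "pause" };
  }
}
```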
Deep dive: Proprioceptive memory and neural pathways
When interaction involves the body, memory becomes proprioceptive. Users don’t just remember that a function exists—they remember how it feels to perform it. The rotation that brings up the calendar. The pressure that confirms a decision. The gesture that triggers a morning briefing agent.
This embodied memory is more durable and faster to access than declarative knowledge of menu locations. Over time, Polyopticon interaction becomes fluent—a skilled practice rather than a cognitive task.
The device engages the user’s entire sensory and motor system, generating much more definitive neural pathways as it’s used, as well as capturing more “depth” of those pathways through the interface. This is not incidental—it is by design.
Temporal capture: Actions as first-class objects
Every meaningful action accumulates on the timeline. Not a log to be reviewed later, but a live structure you can navigate, edit, and compose. The device intrinsically allows the capture of experience and provides the user with an interface for developing highly personal, highly useful tools for generating nuanced output.
What gets captured
Significant actions
Not every micro-gesture, but meaningful operations: navigation events, selections, transformations, decisions. The system distinguishes signal from noise.
Contextual state
The capsule configuration, active face, selection state, and relevant parameters at the moment of action. Enough to reconstruct intent.
Temporal markers
Timestamps, duration, and sequence position. Actions have a place in time, enabling replay at original speed or scrubbing at any rate.
Decision points
Moments where alternatives existed. Branch points are marked, enabling exploration of paths not taken and comparison of outcomes.
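A single captured action might carry all four categories above in one record. The field names in this sketch are illustrative assumptions.

```typescript
// Illustrative record of one captured action, covering the four categories above.
interface CapturedAction {
  // Significant action, not every micro-gesture
  operation: "navigate" | "select" | "transform" | "decide";
  // Contextual state: enough to reconstruct intent
  context: {
    activeFaceId: string;
    capsuleIds: string[];
    selection?: string;
  };
  // Temporal markers: a place in time, for replay or scrubbing
  timestamp: number;
  durationMs: number;
  sequenceIndex: number;
  // Decision points: alternatives that existed at this moment, if any
  branchOptions?: string[];
}
```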
Timeprint structure
A timeprint is a captured sequence—a first-class object you can manipulate. All the meta-information for sorting and searching processes extends into the time dimension; timeprints can be played forward and back to get a richer experience of how a user arrived at a given point.
Replay
Execute the sequence again, at original speed or accelerated. Watch the reasoning unfold.
Edit
Trim, splice, reorder. Remove false starts. Tighten the path from question to answer.
Branch
Fork at any decision point. Explore alternatives. Compare outcomes side by side.
Annotate
Add notes, tags, explanations. Turn captured action into documented method.
Compose
Combine timeprints into larger sequences. Build complex workflows from proven components.
Share
Export to collaborators. Give them not just results but the path—reviewable, reproducible, adaptable.
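As a sketch, a timeprint and its six operations could be typed roughly as follows; every name here is hypothetical.

```typescript
// Hypothetical shape of a timeprint and the six operations above.
// `Action` stands in for a captured-action record.
type Action = { operation: string; timestamp: number };

interface Timeprint {
  id: string;
  actions: Action[];
  annotations: Map<number, string>;   // notes keyed by action index
}

interface TimeprintOps {
  replay(tp: Timeprint, speed: number): void;               // re-execute at original or accelerated speed
  edit(tp: Timeprint, keep: number[]): Timeprint;           // trim, splice, reorder by action index
  branch(tp: Timeprint, atIndex: number): Timeprint;        // fork at a decision point
  annotate(tp: Timeprint, atIndex: number, note: string): void;
  compose(parts: Timeprint[]): Timeprint;                   // build larger workflows from components
  share(tp: Timeprint, collaboratorId: string): void;       // export the path, not just the result
}
```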
Agents: Encoded behavior
When a timeprint proves valuable, it can be encoded as an agent—a reusable behavior pattern triggerable through haptic or spatial input. Agents transform personal technique into executable capability. A physical repertoire is generated that can be task-specific, application-specific, or globally applicable.
Agent types
Macro agents
Deterministic sequences that execute exactly as recorded. “Morning briefing”: rotate to news face, pull headlines, check calendar, summarize tasks. One gesture, reliable execution.
Template agents
Parameterized patterns that adapt to context. “Research sweep” takes a topic and applies a consistent method—source gathering, cross-referencing, summary generation—with the topic as input.
Reactive agents
Condition-triggered behaviors that run in background. “Alert on threshold”: monitors a data stream and surfaces a notification when specified conditions are met.
AI-augmented agents
Patterns that incorporate AI reasoning at defined points. The agent provides structure—what to look for, how to proceed—while AI provides judgment within that structure.
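The four agent types can be expressed as variants of a single encoded-behavior type. This is a structural sketch only; the field names are assumptions.

```typescript
// Structural sketch of the four agent types; all field names are assumptions.
type Agent =
  | { kind: "macro"; timeprintId: string }                                  // deterministic replay
  | { kind: "template"; timeprintId: string; parameters: string[] }         // adapts to inputs, e.g. a topic
  | { kind: "reactive"; timeprintId: string; condition: string }            // runs when a condition holds
  | { kind: "aiAugmented"; timeprintId: string; judgmentPoints: number[] }; // AI fills defined steps
```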
Agent triggers
Agents can be invoked through multiple mechanisms, each mapping a linguistic structure onto a physical movement and freeing users from having to make these simple physical and structural activities explicit:
Gesture
A specific haptic pattern—a signature touch sequence or device motion—triggers the agent. Personal gestures for personal agents.
Spatial
Navigating to a designated face or capsule configuration triggers the associated agent. Place defines behavior.
Temporal
Scheduled execution at specified times or intervals. Agents can run while the device rests.
Conditional
Data state triggers execution. When a monitored value crosses a threshold, when new content arrives, when a pattern is detected.
Chained
Completion of one agent triggers another. Build complex workflows from modular components.
Explicit
Direct invocation from an agent library or command interface. Browse, select, execute.
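The six trigger mechanisms might be encoded as a tagged union that binds a trigger to an agent. The names below are illustrative.

```typescript
// The six trigger mechanisms as a tagged union bound to an agent; names are illustrative.
type Trigger =
  | { kind: "gesture"; hapticSignature: string }     // a personal touch or motion pattern
  | { kind: "spatial"; faceId: string }              // place defines behavior
  | { kind: "temporal"; schedule: string }           // scheduled times or intervals
  | { kind: "conditional"; predicate: string }       // data state, thresholds, arrivals
  | { kind: "chained"; afterAgentId: string }        // completion of another agent
  | { kind: "explicit" };                            // invoked from the agent library

interface AgentBinding {
  agentId: string;
  trigger: Trigger;
}
```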
Resolution shifting: Fidelity as a dimension
Not all information needs equal attention. Resolution shifting lets you modulate fidelity—zooming in for detail, pulling back for context—without losing your place in the workspace. Contents shift in resolution and relative relationship as you navigate.
High fidelity (foreground)
Full detail, full interactivity. The current focus of attention. One capsule at a time receives this treatment.
Medium fidelity (periphery)
Reduced detail, limited interaction. Adjacent capsules visible enough to maintain awareness, quiet enough not to distract.
Low fidelity (background)
Minimal representation or hidden. Capsules continue running—agents execute, streams update—but don’t compete for attention.
The pinch gesture controls resolution: pinch to pull back and see more capsules at lower fidelity; spread to focus in on fewer at higher fidelity. The workspace zooms semantically, not just visually. This provides the ability to navigate through the full field of related objects while maintaining appropriate levels of attention.
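A minimal sketch of the semantic zoom: fidelity falls off with distance from the focus, scaled by the pinch level. The thresholds below are arbitrary assumptions for illustration.

```typescript
// Sketch of semantic zoom: fidelity falls off with distance from the focus,
// scaled by the pinch level. Thresholds are arbitrary assumptions.
type Fidelity = "high" | "medium" | "low";

function assignFidelity(distanceFromFocus: number, pinchLevel: number): Fidelity {
  const scaled = distanceFromFocus * pinchLevel;
  if (scaled < 1) return "high";     // foreground: full detail, full interactivity
  if (scaled < 3) return "medium";   // periphery: visible but quiet
  return "low";                      // background: still running, not competing for attention
}
```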
Collaboration: Shared structure, not just shared files
Traditional collaboration shares outputs—documents, messages, results. Polyopticon enables sharing the structure of work itself: the path taken, the reasoning applied, the technique used. Users can let others virtually see what they’ve seen, go where they’ve gone, and arrive at a common frame of reference in a rich, physical-semantic fashion.
Timeprint handoff
Share a timeprint with a collaborator. They can replay your reasoning, see where you made decisions, understand not just what you concluded but how you got there.
Agent transfer
Give someone your agent—your encoded technique. They can execute it in their own context, adapt it to their needs, build on your method.
Workspace sync
Share a capsule configuration—the spatial arrangement of a workspace. Onboard a collaborator not just to information but to a way of organizing it.
Live co-navigation
Navigate a workspace together in real time. See each other’s focus, coordinate attention, work in parallel on shared structure.
Deep dive: From tacit to transmissible
Most expertise is tacit—embodied in practice, difficult to articulate, lost when experts leave. Traditional knowledge management captures artifacts (documents, procedures) but struggles with the judgment and technique that produce them.
Polyopticon’s capture system makes expertise transmissible in a new way. When an expert navigates a complex analysis, their path is captured as a timeprint. When they encode a reliable method as an agent, their technique becomes executable by others.
This doesn’t eliminate the need for judgment—but it gives judgment a vehicle. Newcomers can replay expert paths, run expert agents, and build their own expertise on a foundation of captured technique rather than starting from scratch. The ability to pass along a model that lets a co-worker inhabit your mindset is a unique capacity.
The learning curve
The interaction model is designed for progressive mastery. Start simple. Add complexity as fluency develops. The fundamentals work like any touchscreen device—just in three dimensions plus time.
Basic navigation
Rotate to faces, tap to select, hold to preview. The fundamentals of spatial navigation in a polyhedral form.
~30 minutes to basic competence
Haptic vocabulary
Pressure dynamics, gesture combinations, motion semantics. The physical vocabulary expands interaction range without adding visual complexity.
~1 week to comfortable fluency
Temporal mastery
Active use of timeline—scrubbing, branching, editing timeprints. Work becomes reviewable and refinable.
~2-4 weeks to integrated practice
Agent creation
Encoding personal techniques as agents. Building a library of reusable behaviors. Expertise becomes executable.
~1-2 months to productive authoring
Collaborative practice
Sharing timeprints and agents. Building on others’ work. Participating in shared workspaces and coordinated navigation.
Ongoing development
System extension
Creating custom capsule types, designing domain-specific agents, integrating external systems. Power user territory.
For developers and advanced users
Frequently asked questions
How is temporal-haptics different from gesture-based interfaces I’ve seen before?
Most gesture interfaces map gestures to commands—swipe to go back, pinch to zoom. Temporal-haptics integrates gesture with spatial structure and temporal capture. The gesture doesn’t just trigger a function; it navigates a persistent topology and accumulates on a manipulable timeline. The entire design domain is imbued with this temporal-haptic model—all objects the device interacts with are likewise enabled to operate in four dimensions.
What does “four-dimensional object model” mean practically?
A four-dimensional object model must be developed and adopted to fully realize the capabilities the Polyopticon offers. Practically, this means that every object in the system—capsules, agents, streams, tools—carries temporal metadata. Objects have history. They can be scrubbed, branched, replayed. Relationships between objects are tracked over time. The workspace isn’t a snapshot; it’s a navigable trajectory.
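One way to read this practically is that every object carries its own trajectory. The sketch below assumes a hypothetical FourDObject shape and a scrub helper; it is illustrative, not a published schema.

```typescript
// Illustrative shape for "every object carries temporal metadata".
// FourDObject and scrub are hypothetical names, not a published schema.
interface FourDObject<T> {
  id: string;
  current: T;                                   // state now
  history: { timestamp: number; state: T }[];   // trajectory, sorted oldest first
  branches: string[];                           // ids of forked trajectories
}

// Scrubbing returns the state at or just before a given time.
function scrub<T>(obj: FourDObject<T>, atTime: number): T {
  const past = obj.history.filter(h => h.timestamp <= atTime);
  if (past.length === 0) return obj.history[0]?.state ?? obj.current;
  return past[past.length - 1].state;
}
```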
Can I use Polyopticon with existing applications?
Yes, through capsule adapters. A capsule can wrap a web application, a document, a data feed, or an API. The device enables users to draw in application resources and widgets and plug in feeds in a seamless, low-overhead fashion. The interaction model applies to how you navigate between and coordinate these external resources, even if the resources themselves use traditional interfaces when focused.
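An adapter of this kind might expose a small surface to the workspace: render at a requested fidelity, refresh when the underlying resource updates, release it when done. The interface below is a hypothetical sketch, not a documented integration API.

```typescript
// Hypothetical adapter surface for wrapping an external resource as a capsule.
// Method names are illustrative; real integrations would differ.
interface CapsuleAdapter {
  resourceUrl: string;                                  // web app, document, feed, or API endpoint
  render(fidelity: "high" | "medium" | "low"): void;    // draw at the requested resolution
  refresh(): Promise<void>;                             // pull new content when the source updates
  dispose(): void;                                      // release the external resource
}
```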
How do timeprints handle sensitive information?
Timeprints capture action structure, not necessarily content. Redaction tools let you strip sensitive data while preserving the reasoning path. Sharing controls determine what’s included when a timeprint is exported. The system is designed for reproducibility within appropriate boundaries.
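As an illustration of the structure-versus-content distinction, a redaction pass might drop payloads while keeping the operation sequence. The shapes below are assumptions for the sketch.

```typescript
// Illustration of structure-versus-content: a redaction pass drops payloads
// while keeping the operation sequence. Shapes are assumptions for the sketch.
interface SharedAction {
  operation: string;
  timestamp: number;
  payload?: string;   // potentially sensitive content
}

// Strips payloads before export so the reasoning path survives without the data.
function redactForExport(actions: SharedAction[]): SharedAction[] {
  return actions.map(({ operation, timestamp }) => ({ operation, timestamp }));
}
```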