LucidPast Hero
XR Design / Interaction Research / 2025

LUCID
PAST

Creating Virtual Dreams from Historical Memory

Role
Solo XR Designer & Developer
Duration
2-week design sprint
Scope
Proof-of-concept exploration
Output
Interaction framework, volumetric pipeline, validation prototype
What if exploring history felt like dreaming? Where your curiosity guides the journey, themes emerge organically, and the same starting point leads different people to completely different insights.
01

The Problem Space

Archives as Unexplored Territory

The Library of Congress holds over 16 million images in its Prints and Photographs Division, with around 1.6 million digitized. Museums worldwide have similar collections. Yet most remain locked behind keyword search and chronological browsing - interactions designed for librarians, not storytellers.

XR enables natural, curiosity-driven exploration. But existing museum VR experiences are linear guided tours - just digital versions of audio guides.

Three core issues I identified:

01 Discovery is broken.
02 Context is rigid.
03 Engagement is shallow.

The Insight

The brain holds personal memory.
Archives hold collective memory.

Dreams are how we explore one.
LucidPast is how we explore the other.

When you dream, your brain does not present memories in chronological order. It follows associations - a face leads to a place leads to a feeling. The result is not chaos. It is meaning, arrived at sideways.

Archives work the same way. The Library of Congress does not know which photograph will matter to you, or why. Keyword search forces you to already know the answer before you begin. LucidPast replaces the search bar with the dream - letting association, attention, and curiosity do what they have always done best.

02

Core Innovation

How Dreams Actually Work

I asked: Can we recreate this algorithmically?

The system I designed:

  • Gaze reveals intent
  • Intent triggers transitions
  • Transitions build themes
  • Themes create meaning
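
A minimal sketch of that loop. All object and method names here are hypothetical; the prototype's internals are not reproduced:

```python
DWELL_THRESHOLD_S = 1.0  # the 1-second dwell threshold calibrated in Section 4.1

def dream_step(scene, archive, session):
    """One tick of the gaze -> intent -> transition -> theme loop.
    scene, archive, and session are hypothetical objects, not the prototype's API."""
    gaze = scene.current_gaze_target()
    if gaze is None or gaze.dwell_s < DWELL_THRESHOLD_S:
        return  # no sustained attention yet

    # 1. Gaze reveals intent: tags on the dwelled-on object approximate interest.
    session.intent.update(gaze.object.tags)

    # 2. Intent triggers a transition: pick the most associated next photograph.
    next_photo = archive.most_associated(gaze.object, weights=session.intent)

    # 3. Transitions build themes: remember which association carried us across.
    session.theme_trail.append((gaze.object.id, next_photo.id))

    # 4. Themes create meaning: the trail later drives progressive narrowing
    #    (Section 3.2) and pathway priming (Section 3.3).
    scene.transition_to(next_photo, anchor=gaze.object)
```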

Black Mirror reference - "The Entire History of You"; archives as collective memory.
03

Design System Architecture

I broke the experience into three interconnected subsystems. Each solves a specific design challenge.

3.1

Persistent Object Transition

Challenge

Environmental changes in VR cause disorientation. How do you morph between completely different scenes without losing the user?

Solution

The gazed object becomes a spatial anchor - context morphs around it, eliminating disorientation.
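
One way to realize the anchor, sketched with numpy under the assumption that both the outgoing and incoming scenes expose a 4x4 object-to-world pose for the gazed object:

```python
import numpy as np

def anchor_alignment(anchor_pose_old: np.ndarray, anchor_pose_new: np.ndarray) -> np.ndarray:
    """Rigid transform that places the incoming scene so its copy of the gazed
    object lands exactly where the user is already looking.

    Both arguments are 4x4 object-to-world matrices. The returned matrix is
    applied to the entire incoming scene before the crossfade, so the anchor's
    world pose never changes and the user's fixation point stays stable.
    """
    # world_from_new_scene = old_world_pose @ inverse(new_world_pose)
    return anchor_pose_old @ np.linalg.inv(anchor_pose_new)
```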

3.2

Three-Act Progressive Narrowing

Challenge

Pure random exploration becomes meaningless. Pure curation removes agency. How do you balance both?

Solution

Three-act structure applied algorithmically - 100% open for 7 minutes, narrowing to 30% of options by minute 20.
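
A sketch of that schedule. The act boundaries and endpoint percentages come from the storyboard table below; the linear interpolation between them is my assumption, not a documented detail:

```python
def openness(t_min: float) -> float:
    """Fraction of candidate transitions kept open at minute t.

    Act 1 (0-7 min): fully open. Act 2 (7-13 min): narrows to 40%.
    Act 3 (13-20 min): narrows to 30%, where it stays.
    """
    if t_min <= 7:
        return 1.0
    if t_min <= 13:
        return 1.0 + (0.4 - 1.0) * (t_min - 7) / 6.0    # 100% -> 40%
    if t_min <= 20:
        return 0.4 + (0.3 - 0.4) * (t_min - 13) / 7.0   # 40% -> 30%
    return 0.3

def narrowed_candidates(candidates: list, theme_scores: dict, t_min: float) -> list:
    """Keep the top openness(t) fraction of candidates by detected-theme match."""
    k = max(1, round(openness(t_min) * len(candidates)))
    return sorted(candidates, key=lambda c: theme_scores.get(c, 0.0), reverse=True)[:k]
```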

3.3

Pathway-Dependent Interpretation

Challenge

The same photograph can reveal labor history, tech innovation, or cultural change. Institutions usually pick one.

Solution

Pathway priming activates different semantic networks - the same photograph means different things depending on how you arrived at it.
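
A toy sketch of pathway priming. The tags, networks, and scoring rule are illustrative, not the project's actual semantic model:

```python
# Hypothetical data: one photograph, three candidate interpretive frames.
PHOTO_TAGS = {"assembly_line_1913": {"labor", "machinery", "mass_production", "workers"}}

SEMANTIC_NETWORKS = {
    "labor_history":   {"labor", "workers", "wages", "child_labor"},
    "tech_innovation": {"machinery", "mass_production", "automation"},
    "cultural_change": {"consumerism", "urbanization", "workers"},
}

def primed_reading(photo_id: str, pathway: list) -> str:
    """Pick the frame whose network best overlaps the photo's tags, weighted by
    how strongly the arrival pathway has already activated that network."""
    tags = PHOTO_TAGS[photo_id]
    activation = {net: sum(step in concepts for step in pathway)
                  for net, concepts in SEMANTIC_NETWORKS.items()}
    return max(SEMANTIC_NETWORKS,
               key=lambda net: len(tags & SEMANTIC_NETWORKS[net]) * (1 + activation[net]))

# Arriving via child-labor imagery primes the labor reading of the same photo:
print(primed_reading("assembly_line_1913", ["child_labor", "workers"]))  # labor_history
```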

Contextual Priming Diagram

Interaction storyboard - 36 panels mapping the full experience flow before implementation.

LucidPast Narrative Storyboard
Phase | Duration | System Behavior | User Experience
Act 1 | 0-7 min | Observes gaze patterns, 100% open | "I can go anywhere"
Act 2 | 7-13 min | Narrows to 40% of options matching detected themes | "This is getting interesting"
Act 3 | 13-20 min | Focuses to 30% of options, clear thematic thread | "Oh, this is about..."
04

Interaction Design

4.1

Gaze as Primary Input

Why gaze? I evaluated five input methods:

Hand controllers → learning curve, breaks immersion
Voice commands → interrupts flow, socially awkward in museums
Hand gestures → requires conscious activation
Head pointing → neck fatigue
Eye tracking → already happening, zero learning curve, reveals unconscious attention
Dwell time calibration: research on gaze interfaces shows that 300-400ms thresholds cause accidental activations (the "Midas Touch" problem). Apple Vision Pro's Dwell Control accessibility mode defaults to 1000ms; I validated 1 second as the right threshold for LucidPast.
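
A minimal dwell detector built around that 1-second threshold. The per-frame update API is an assumption:

```python
import time
from typing import Optional

DWELL_THRESHOLD_S = 1.0  # Vision Pro Dwell Control default; avoids the Midas Touch problem

class DwellDetector:
    """Fires once when gaze rests on the same target for the full threshold."""

    def __init__(self, threshold_s: float = DWELL_THRESHOLD_S):
        self.threshold_s = threshold_s
        self.target: Optional[str] = None
        self.since = 0.0
        self.fired = False

    def update(self, target_id: Optional[str], now: Optional[float] = None) -> Optional[str]:
        """Call every frame with the currently gazed object ID (or None)."""
        now = time.monotonic() if now is None else now
        if target_id != self.target:           # gaze moved: restart the clock
            self.target, self.since, self.fired = target_id, now, False
            return None
        if target_id is not None and not self.fired and now - self.since >= self.threshold_s:
            self.fired = True                   # fire exactly once per dwell
            return target_id
        return None
```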
4.2

Two Interaction Modes

Early concepting revealed a problem: "What if I'm looking at a rendering artifact, or examining something out of confusion rather than interest?" Solution: users choose a mode at entry and can switch at any time.

Flow Mode - Default (85%)
  • Pure 1-second gaze → automatic transition
  • Maximum immersion; minimizes cognitive load
  • 12% false-positive rate

Intentional Mode
  • 1-second gaze + pinch gesture confirmation
  • Explicit gestures don't break flow state
  • 2% false-positive rate
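
How the two modes could gate a transition, assuming the dwell detector above and a boolean pinch signal from the hand tracker:

```python
from enum import Enum

class Mode(Enum):
    FLOW = "flow"                # dwell alone triggers the transition
    INTENTIONAL = "intentional"  # dwell arms it; a pinch confirms

def should_transition(mode: Mode, dwell_fired: bool, pinch: bool) -> bool:
    """Flow Mode: a 1s dwell is enough. Intentional Mode: dwell + pinch required."""
    if mode is Mode.FLOW:
        return dwell_fired
    return dwell_fired and pinch
```

The trade-off is explicit: Flow Mode's 12% false-positive rate is the price of zero friction, while Intentional Mode buys a 2% rate for one pinch per transition.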
User Interaction Flow Diagram
05

Technical Implementation

Volumetric Reconstruction: Two Approaches Validated

Challenge: Creating explorable 3D environments from single flat photographs without fabricating content beyond what's visible. I validated two fundamentally different pipelines on the same test dataset of 10 diverse archival photographs:

Model Selection - Iterative Validation

Round 1 - Nov 2025: Depth Anything V2
Good generalisation across scene types. Soft edges on portraits - foreground subjects lacked sharp boundaries. Processing: ~1-2s per image.
Depth Anything V2 model output (interactive 3D depth map)
Eliminated - portrait fidelity insufficient.

Round 1 Winner - Nov 2025: Apple Depth Pro
Metric depth accuracy. Sharp facial boundaries - eyes, hair, mouth edges resolved cleanly. Critical for archival portrait content. Processing: ~5-10s per image.
Apple Depth Pro model output (interactive 3D depth map)
Selected over Depth Anything V2.

Round 2 - Dec 2025: SHARP (Gaussian Splats)
Released Dec 2025. Tested on the same dataset. Photorealistic splats in under 1 second. Texture fidelity - film grain, tonal gradation - surpassed Depth Pro mesh output. Ethical constraint enforced architecturally.
SHARP model output (interactive Gaussian splat)
Final selection.

Pipeline A - Depth Pro + Mesh

1. Depth Pro depth estimation - ~8s/image
2. Point cloud generation - ~2 min/image
3. Poisson surface reconstruction - ~15 min/image
4. Texture mapping - ~3 min/image

Total: ~20 minutes per image
Output: textured polygon mesh (.glb)
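
A sketch of steps 2-3 using Open3D, assuming the Depth Pro stage has already produced a metric depth map and camera intrinsics. The Poisson depth parameter is a typical value, not the project's setting:

```python
import numpy as np
import open3d as o3d

def depth_to_mesh(rgb: np.ndarray, depth_m: np.ndarray, fx: float, fy: float,
                  cx: float, cy: float) -> o3d.geometry.TriangleMesh:
    """Back-project a metric depth map to a colored point cloud, then
    Poisson-reconstruct a surface (steps 2-3 of Pipeline A)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1).reshape(-1, 3)

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(pts)
    pcd.colors = o3d.utility.Vector3dVector(rgb.reshape(-1, 3) / 255.0)
    pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    return mesh

# Step 4's .glb export could then go through o3d.io.write_triangle_mesh;
# texture mapping beyond per-vertex colors is a separate pass.
```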

Pipeline B - Apple SHARP Splatting

1. SHARP inference - <1s/image

Total: under 1 second per image
Output: Gaussian splat (.ply)
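
SHARP's exact .ply layout is not documented here; assuming it follows the common Gaussian-splat vertex schema (per-splat position, opacity, scales, rotation quaternion, spherical-harmonic color terms), the output can be inspected with the plyfile library:

```python
from plyfile import PlyData  # pip install plyfile

# Assumption: the splat file uses the widely shared Gaussian-splat .ply schema.
ply = PlyData.read("scene.ply")
vertex = ply["vertex"]
print(len(vertex), "splats")
print([p.name for p in vertex.properties])  # e.g. x, y, z, opacity, scale_0, rot_0, f_dc_0...
```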

Why SHARP produces better results for LucidPast:

Side-by-side comparison: SHARP Gaussian splat vs. Depth Pro mesh (texture fidelity visibly higher in the splat).
Test dataset validated across extreme conditions:

  • Migrant Mother - close portrait
  • Helen Keller - close portrait
  • MLK gathering - crowd scene
  • Tesla double exposure - unusual lighting
  • Aldrin on moon - extreme environment
  • 1940s diner - interior architecture
  • Radio studio 1942 - interior architecture
  • Diner detail - interior (alternate angle)
  • Gandhi - group of people

Both pipelines generated convincing 3D across all conditions. SHARP showed measurably superior texture preservation in portrait-heavy content - the dominant image type in institutional archives.

Competitive Validation - Why Fabrication Fails
WorldLabs (marble.worldlabs.ai) - Tested

Single-photo to 360-environment conversion. Tested with Migrant Mother - the consistent benchmark image across all pipeline validation. WorldLabs uses AI image generation to approximate and fill invisible areas beyond the camera frame.

Result: Migrant Mother's face was completely altered and unrecognisable. The child was removed entirely. The background was fabricated as open ground with multiple trees - approximated from the partial background visible in the original.

The confirmed constraint

The moment a system generates content beyond the photograph, it loses the historical subject. This is not a technical failure - it is a category error. You are no longer showing the archive. You are showing an AI's interpretation of the archive.

LucidPast limits parallax to visible content only. What the photographer did not capture, the system does not invent.
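
One way to enforce that constraint is to budget head translation against the steepest depth discontinuity in the frame, so disocclusion seams never grow wide enough to need inpainting. The formula and function below are mine, not the project's published method: at a depth edge between z_near and z_far, a lateral translation t opens a gap of roughly focal_px * t * (1/z_near - 1/z_far) pixels.

```python
import numpy as np

def max_parallax_shift(depth_m: np.ndarray, focal_px: float,
                       hole_budget_px: float = 4.0) -> float:
    """Largest lateral head translation (meters) that keeps every disocclusion
    seam narrower than hole_budget_px, so no invented content is ever needed."""
    inv_z = 1.0 / np.clip(depth_m, 1e-3, None)
    # Steepest inverse-depth discontinuity between horizontally adjacent pixels.
    max_edge = np.abs(np.diff(inv_z, axis=1)).max()
    return float("inf") if max_edge == 0 else hole_budget_px / (focal_px * max_edge)
```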


WorldLabs output - Migrant Mother's face altered beyond recognition, child removed, background fabricated from partial scene data.

06

The Narrative Sequence

2D Slideshow Validation

I built a functional slideshow with simulated gaze tracking to validate the narrative mechanics before investing in XR implementation. In internal testing, a theme emerged clearly by images 7-8, despite no explicit narration.
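
A toy version of the simulated-gaze selection step; the image IDs and tags are illustrative, not the prototype's data:

```python
import random

def next_image(gaze_tag: str, pool: dict, rng: random.Random) -> str:
    """One simulated-gaze slideshow step: prefer images sharing the gazed-at
    tag; break ties at random so different runs drift toward different themes."""
    sharing = [img for img, tags in pool.items() if gaze_tag in tags]
    return rng.choice(sharing or list(pool))

pool = {
    "textile_mill_child": {"child_labor", "machinery"},
    "breadline":          {"depression", "mannequins"},
    "store_window":       {"mannequins", "advertising"},
}
print(next_image("mannequins", pool, random.Random(7)))  # breadline or store_window
```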

Act 1: Observation
  • Migrant Mother (1936) - Gaze: child's worried face
  • Textile mill child laborer - Gaze: machinery
  • Ford assembly line workers - Gaze: workers' flat caps

Act 2: Narrowing
  • Newsboys selling papers - Gaze: economic systems forcing child labor
  • Depression breadline - Gaze: department store mannequins in background
  • Store window display - Gaze: mannequin's painted smile

Act 3: Synthesis
  • Mannequins with artificial happiness - Gaze: the smile itself
  • Beauty cream advertisement - Gaze: manufacturing process
  • Makeup application behind-the-scenes - Gaze: final reveal
Emergent Theme

Commodification of authenticity - society manufacturing happiness during the Depression while real suffering exists steps away.

The emergent theme was confirmed by images 7-8 without explicit narration. The pivot point felt surprising yet inevitable - the "dream logic" quality I was targeting.

07

Key Design Decisions

08

User Validation & Testing (In Progress)

This section will be completed after conducting user testing with 3-5 participants following Nielsen Norman Group's qualitative testing guidelines. Content will include testing methodology, key findings on thematic emergence rates, narrative coherence ratings, and representative participant quotes.

09

Outcomes & Reflection

What This Project Validated

Design Hypothesis

Attention-driven sequencing generates emergent themes. Progressive narrowing balances spontaneity with coherence.

Technical Feasibility

Two volumetric pipelines validated. SHARP produces photorealistic splats in <1s, enforcing ethical constraints architecturally.

Interaction Pattern

Persistent object transitions reduce disorientation. The two-mode gaze + gesture system addresses false positives without breaking flow.

Skills Demonstrated

Interaction Design

Spatial UX · Multi-modal Input · Zero-UI Paradigm

Systems Thinking

Algorithm Design · Semantic Networks

Technical & Research

XR Prototyping · ML Pipelines · User Research Validation

What I Learned

Making history feel like memory

Image Sources

  • Archival photographs: Library of Congress, Prints and Photographs Division
  • Neil Armstrong portrait: NASA (public domain)
  • Black Mirror - Eulogy (S07E05): Netflix / House of Tomorrow - used for critical commentary
  • Depth map outputs and 3D reconstructions: author's own work