Headset Holodeck

Headset Holodeck is my attempt to build a real-world XR system in conversation with one of science fiction’s most durable ideas: the Star Trek holodeck. Official franchise material traces an early version of the concept back to Star Trek: The Animated Series as a “recreation room,” then points to The Next Generation as the place where the holodeck became a signature part of the Star Trek universe. Paramount’s own corporate site says CBS Studios produces “global franchises like the Star Trek universe,” so I’m comfortable saying plainly that the idea and the term belong to that franchise lineage, and that this project is openly inspired by it.

What I have wanted for years is not a loose collection of XR features, but a system that feels like entering a medium. Put on the headset, speak naturally, ask for a world, step into it, reshape it, save it, return later, and continue where you left off. The design work already ties together speech-intent routing, panoramic previewing, splat loading, floor-aware placement, object manipulation, multiple view modes, and persistent world configuration so the experience can behave as one continuous thing instead of a pile of isolated tricks.
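
To give a concrete flavor of what “speech-intent routing” means here, a minimal router can map transcribed utterances to handlers via patterns. Everything below (the class, the patterns, the handler names) is a hypothetical sketch, not the project’s actual command grammar:

```python
import re

class IntentRouter:
    """Toy speech-intent router: first matching pattern wins."""
    def __init__(self):
        self.routes = []  # list of (compiled pattern, handler)

    def route(self, pattern):
        def register(handler):
            self.routes.append((re.compile(pattern, re.IGNORECASE), handler))
            return handler
        return register

    def dispatch(self, utterance):
        for pattern, handler in self.routes:
            match = pattern.search(utterance)
            if match:
                return handler(match)
        return None  # unrecognized; caller can fall back to a default reply

router = IntentRouter()

@router.route(r"\b(create|generate|make)\b.*\bworld\b")
def generate_world(match):
    return "generate_world"

@router.route(r"\bend program\b")
def end_program(match):
    return "end_program"
```

In the real app the handlers would kick off world generation, view switching, and so on; here they just return a tag so the routing itself is visible.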

It is voice-first, but not voice-only. Standard UI still matters. Sometimes the right input is a keyboard during development, sometimes it is a controller in-headset, and sometimes it is hands. Voice should feel like the front door, not the only door. Right now, push-to-talk is a temporary placeholder while I work toward the version I actually want: wake-word activation. You say “Computer,” the system knows you are addressing it, and then it listens. That obviously echoes one of Star Trek’s most familiar interaction patterns, especially the ship’s computer voice associated for decades with Majel Barrett-Roddenberry. I have not cloned her voice (I chose a fairly minimal model), though one could do that, I suppose.
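
The control flow I am after can be sketched in a few lines. Real wake-word detection runs a keyword spotter over audio frames; this toy version works on already-transcribed text purely to show the gating logic, and the wake word and listening window are assumptions:

```python
import time

WAKE_WORD = "computer"   # assumption: a single fixed wake word
LISTEN_WINDOW_S = 8.0    # assumption: how long to stay awake afterward

class WakeWordGate:
    """Gate transcripts behind a wake word (sketch only)."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.awake_until = 0.0

    def feed(self, transcript):
        text = transcript.strip().lower()
        now = self.clock()
        if text.startswith(WAKE_WORD):
            # Address + command in one breath: "Computer, end program"
            self.awake_until = now + LISTEN_WINDOW_S
            remainder = text[len(WAKE_WORD):].lstrip(" ,")
            return remainder or None   # bare "Computer" just opens the window
        if now < self.awake_until:
            return text                # still inside the listening window
        return None                    # speech not addressed to the system
```

The point of the window is that follow-up commands should not require repeating the wake word every sentence.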

I want the system’s responses to feel equally natural. Text-to-speech is handled on-device, which saves both time and money, and spoken feedback is paired with visible text status so the app stays legible even when the user is busy or half-focused on something else in the scene. In Star Trek, the computer’s replies are simply part of the environment; they do not feel bolted on. That is the effect I’m after here, just built with present-day constraints instead of fictional ones.

One of the nicest parallels to Trek is the transition from request to environment. The franchise often treated the holodeck as a place that seemed to be there the moment the crew stepped through the door: Riker entering a forest in “Encounter at Farpoint,” Sherlock Holmes environments materializing for Data and Geordi, or the baseball field in Deep Space Nine’s “Take Me Out to the Holosuite.” A real headset app has to earn that feeling honestly, so Headset Holodeck uses panoramic previewing while the heavier splat data catches up. The user sees the destination arrive around them instead of staring at dead air.
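
The load pattern itself is simple to sketch: start the heavy splat fetch, surround the user with the cheap panorama the moment it arrives, and swap when the splat is ready. The loader names and delays below are stand-ins, not the app’s real API:

```python
import asyncio

async def fetch_panorama(world_id):
    await asyncio.sleep(0.01)       # fast: a single equirectangular image
    return f"pano:{world_id}"

async def fetch_splat(world_id):
    await asyncio.sleep(0.05)       # slow: the full gaussian-splat payload
    return f"splat:{world_id}"

async def enter_world(world_id, display):
    splat_task = asyncio.create_task(fetch_splat(world_id))
    display(await fetch_panorama(world_id))  # user is "there" right away
    display(await splat_task)                # swap in the real geometry

shown = []
asyncio.run(enter_world("forest", shown.append))
```

The key design point is that the splat request starts first, so the preview costs no wall-clock time against the full load.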

Beneath that surface, there is a lot of practical engineering whose job is simply to make the illusion hold together. Generated splat worlds are designed to go through floor analysis so they land correctly. You can also load local and remote splat files and panoramic images, and remote files are cached. You can export worlds from the device, which keeps content from being trapped inside a single pipeline.
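
To give a flavor of the floor-analysis step, here is one simple way it could work (not necessarily how the app does it): treat a low percentile of the splat points’ vertical coordinates as the floor, which tolerates a few stray low outliers, then compute the offset that puts that floor at y = 0:

```python
def floor_offset(ys, percentile=5):
    """Estimate a vertical offset that moves the floor to y = 0.

    Uses a low percentile rather than the minimum so a handful of
    stray points below the real floor don't drag the estimate down.
    Sketch only: a real implementation would likely fit a plane.
    """
    if not ys:
        raise ValueError("no points")
    ordered = sorted(ys)
    idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
    floor_y = ordered[idx]
    return -floor_y  # translate the whole world by this amount on y

# Synthetic cloud: floor near -1.5, one stray outlier at -3.0
ys = [-3.0] + [-1.5] * 5 + [0.0] * 10 + [1.0] * 4
offset = floor_offset(ys)
```

With the outlier ignored, the estimated floor is -1.5, so the world gets lifted by 1.5 meters.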

Once a world is active, I do not want the experience to stop at looking around. The system is being wired for commands that let the user scale, rotate, move, and otherwise direct what is in front of them, while still being able to switch among splat, panorama, and mesh-oriented views depending on the task. That feels very close to the way Star Trek gradually treated holodeck environments: not as painted backdrops, but as spaces that could be entered, adjusted, tested, and used. Voyager’s Fair Haven is a good example of that shift, a persistent program the crew returns to and modifies rather than a one-off effect.

Persistence matters a lot to me. I do not want worlds to evaporate at the end of a session. Saved world configurations are planned to capture world source, prompts, lighting, placed objects, and related state so a scene can be restored, revised, and extended later. “Computer, save program.” That is one of the points where this starts feeling less like a demo and more like a platform. It also happens to be one of the places where the Star Trek comparison becomes more than cosmetic: holoprograms in that franchise are not disposable visual effects, but named places people revisit, share, and keep changing over time.
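
A saved world configuration could look something like the sketch below; the field names and JSON layout are illustrative, not the project’s actual schema:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class WorldConfig:
    """Hypothetical saved-world record: source, prompt, and scene state."""
    name: str
    source: str                     # e.g. "generated" | "local" | "remote"
    prompt: str = ""
    lighting: str = "default"
    placed_objects: list = field(default_factory=list)
    transform: dict = field(default_factory=lambda: {"scale": 1.0, "y_offset": 0.0})

def save_world(cfg):
    return json.dumps(asdict(cfg), indent=2)

def load_world(blob):
    return WorldConfig(**json.loads(blob))
```

Because the record captures the prompt and source rather than baked geometry alone, a restored world can be regenerated, revised, or extended instead of merely replayed.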

The clearest bridge between present-day XR and the fictional holodeck, though, may be characters. Trek repeatedly pushed beyond scenery into inhabitants: Minuet in “11001001” felt unnervingly responsive, Moriarty in “Elementary, Dear Data” and “Ship in a Bottle” crossed into open self-awareness, and later holographic characters kept extending that territory. That is exactly why AI-driven NPCs sit high on my roadmap. A world becomes much more compelling when it does not just surround you, but answers back through someone living inside it.

The same goes for social presence. Star Trek never treated simulated environments as merely solitary playgrounds. Holodeck and holosuite stories kept turning them into shared spaces, whether for leisure, competition, or full narrative scenarios. “Take Me Out to the Holosuite” is the easy example, but it is memorable precisely because the simulation becomes a place where a group can gather, interact, and build a shared experience together. That is why networking, VOIP, and avatars matter so much in my longer-term plans.

Quick Facts:

State of the Art (Where it sits right now):

  • I’m prepping it for posting on GitHub, so you can play with it yourself, and I’d be glad for any help with it too!
  • It currently uses World Labs and OpenAI backends. Those API keys are not to be shared, and the running app currently has no way to log into those services, so you will need to build in or sideload your own keys.
  • You can describe just about any world. OpenAI enhances the prompt, which is then sent to World Labs. You can also choose from four quality settings (each with speed and cost trade-offs).
  • The UI is very much a placeholder. It needs to be redesigned and fitted into the Arch.
  • You can summon the UI Arch by saying ‘Arch’
  • You can get back to the holodeck by saying ‘End Program’
  • You can alter the size, position, and orientation of anything by name: “Make the cube 23% larger” is a good example. Or, since splats don’t come with metadata, “Make the world 10 times bigger. Move it down 1 meter.”
  • Controllers will be optional, but you currently need at least the right one for the push-to-talk button. The default UX also lives on the controllers, which needs to be fixed.
  • The holodeck model was lifted from a 3D repository of free assets. It needs attribution, licensing, or replacing. (Yes, I tried asking World Labs to generate one, but the result wasn’t nice enough.)
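
For a taste of how transform commands like those above might be parsed, here is a hedged sketch using a few regular expressions; the real grammar (or an LLM-backed parser) would be far broader:

```python
import re

# Illustrative patterns for relative-transform voice commands.
PERCENT = re.compile(r"make (?:the )?(\w+) (\d+(?:\.\d+)?)% (larger|smaller)", re.I)
TIMES   = re.compile(r"make (?:the )?(\w+) (\d+(?:\.\d+)?) times (bigger|smaller)", re.I)
MOVE    = re.compile(r"move (?:the )?(\w+) (up|down) (\d+(?:\.\d+)?) meters?", re.I)

def parse(cmd):
    """Return (action, target, amount) or None for unrecognized commands."""
    if m := PERCENT.search(cmd):
        factor = 1 + float(m.group(2)) / 100
        if m.group(3).lower() == "smaller":
            factor = 1 - float(m.group(2)) / 100
        return ("scale", m.group(1).lower(), factor)
    if m := TIMES.search(cmd):
        n = float(m.group(2))
        return ("scale", m.group(1).lower(), n if m.group(3).lower() == "bigger" else 1 / n)
    if m := MOVE.search(cmd):
        dist = float(m.group(3))
        return ("translate_y", m.group(1).lower(), -dist if m.group(2).lower() == "down" else dist)
    return None
```

So “Make the cube 23% larger” becomes a scale of 1.23 on the object named “cube”, and “Move it down 1 meter” becomes a y-translation of -1 on whatever “it” currently refers to.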

Here is the roadmap I have in mind from here:

  • Wake-word activation, so saying “Computer” opens a hands-free command flow in the same broad spirit as Star Trek’s voice-first interaction with the ship’s computer
  • Integration with 3D libraries and generation systems such as Meshy, Tripo, and similar services, so arbitrary objects can be summoned directly into the scene
  • Integration with audio libraries and generation systems, including tools such as ElevenLabs, so worlds can acquire matching soundscapes, voices, and atmosphere
  • Allow integration of different backends for world and object generation
  • Fallbacks for when an AI backend is unavailable. The system should still handle TTS and STT on-device, plus local/remote file loads, even with no backends at all.
  • Image prompting through the outward-facing camera, allowing the system to use what the user is seeing as direct input
  • Mixed reality integration, so the “outside” of the holodeck becomes your actual surrounding world rather than a sealed fictional room
  • AI-driven NPCs that can inhabit a space as responsive characters, closer in spirit to Minuet, Moriarty, and Trek’s more memorable holographic personalities
  • Avatar-system integration for presence and embodiment
  • Networking and VOIP, so these spaces can become shared social experiences instead of solitary ones
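
The fallback item in the list above reduces to “try the configured backends in order, then drop to on-device.” A minimal sketch, with invented backend names:

```python
def speak(text, backends):
    """Try each (name, tts) backend in order; first success wins.

    Sketch only: the real app would cover STT and generation the same
    way, and the failure modes would be richer than ConnectionError.
    """
    for name, tts in backends:
        try:
            return name, tts(text)
        except ConnectionError:
            continue  # backend missing, unreachable, or unkeyed
    raise RuntimeError("no TTS backend available")

def cloud_tts(text):
    raise ConnectionError("no API key configured")

def on_device_tts(text):
    return f"[local audio for: {text}]"

used, audio = speak("Program complete", [("cloud", cloud_tts), ("on-device", on_device_tts)])
```

Because TTS and STT already run on-device, the app can stay usable with zero backends configured; the cloud services only gate world and object generation.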

That is the point of Headset Holodeck as I see it. I am not trying to copy a piece of science fiction cosmetically. I am trying to build an honest, modern, incomplete, but real version of an idea that Star Trek lodged permanently in people’s imaginations: an environment you can address, enter, shape, revisit, and eventually share. Not a perfect holodeck. A buildable one.

GAME CHANGED: I didn’t care about 3D images or VR cameras – until I saw what Canon is doing with them

Quick hit

You ever have that moment where you realize you’ve been completely wrong about something? I grew up dreaming of the Holodeck, but I’ll admit—I didn’t care much about 3D images or VR cameras until Canon waved its magic wand. And BOOM, suddenly I can see a future of depth and dimension that has literally piqued my interest.

What dropped

So, what’s the scoop? Canon’s Dual Pixel sensors just became the superstar of the 3D content creation jam! They can convert regular 2D snaps into immersive 3D experiences. That’s right, folks. It’s almost like getting that “upgrade” for your life we always wanted. You can read more about it here.

Why it matters

This isn’t just tech for the sake of tech; it feels like we’re finally stepping into a world where 3D content isn’t a hustle. VR cameras feeling niche? Yeah, they WERE. But now? They’re becoming the *thing*. And guess what? It’s not just for the big studios anymore.

How to try it

What’s the first step to dive into this wondrous new world? If you have a Canon camera with those Dual Pixel sensors, good news! Start snapping those 2D pics and watch as they transform into a 3D digital playground. Seriously. GO TRY IT. You might just find yourself a holy grail of creativity you didn’t know existed!

Caveats / gotchas / bugs

But hey, before you dive headfirst into 3D nirvana, consider this: you might run into latency and compute issues when exporting those 3D files. Not to mention the extra gear or software you might need to get that next-level fidelity. So, keep one eye on your workflow and the other on your bank account!

My take

Honestly? I’m stoked that Canon is shaking things up. I can feel the gears in my creative brain turning. My next experiment? I’m going to dust off my Canon camera and see just how well I can spin my lifeless 2D captures into 3D fantasies. If the Holodeck is the dream, maybe I can create my own LITTLE version of it—one 3D frame at a time!

Sources

Digital Camera World

A Short History of My World Building (Part 1, Really Old Stuff)

This ain’t one body’s Tell. It’s the Tell of us all, and you got to listen it and ‘member, ’cause what you hears today, you got to tell the newborn tomorrow. I’s looking behind us now, down the long halls of history back. I sees those of us that got the luck and started the quest for building interesting 3D environments.

What Dreams May Come

I loved that movie. I thought it would be nice to ‘be’ in a Seurat or a Monet painting. I used stock parts from Unity, their Asset Store, and some long-forgotten online 3D repo to assemble this Thomas Kinkade world. I used a screen-space post effect to make it look like an impressionistic painting. This was running off a mid-range laptop in 2014 into a DK1, and it worked pretty well, if I do say so myself. I added smoke, waterfalls, and ambient audio to help bring it to life.

TCHO Chocolate Factory

For FXPAL (Fuji Xerox Palo Alto Laboratory), I did a digital twin of the TCHO chocolate factory at Pier 15 (what is now the Exploratorium). It included working animations of all the machines and the assembly line, as well as access to the actual pan/tilt/zoom cameras, and even the ability to turn a light on and off.

The Great Dickens Christmas Fair

This has been an ongoing passion. In 2013 it was about using a PDF of the floor plan to build the major walls and structures, and stock objects to populate the world. This was clearly going to be very labor-intensive, so I imagined crowd-sourcing it: making a multi-user collaborative platform where folks sharing my vision would contribute models and images to populate it out. But I also knew that future tech would make this approach a waste of time.

Castle Falkenstein

Ten years earlier, I’d coded up an Unreal-to-VRML (X3D) converter. This was the original old-school Unreal Editor stuff. I’d found this lovely castle someone had built, converted it to X3D, and had fun using various VRML players to show it off on the web. After it collected dust for a while, and after I wrote an X3D-to-Unity converter, I was able to import it into Unity and run it on the DK1. I do sometimes revisit this: it runs standalone on Quest 3, it was on Altspace, and it may head to VRChat or some such someday.

Vivaty

In 2006 I joined Vivaty, née Media Machines. One of my favorite parts of building this web-based platform was giving life to smart objects. Here we see four puppies, each with its own personality. I made a system where you could dial in the probability of various animations, like ‘bark’, ‘play’, ‘sleep’, and ‘beg’. Other objects seen here: a picture gallery (populated from FB, Google, MySpace, etc.) and a mirror.

360° Gaussian: Let’s Splat It

Quick hit

Imagine a dreamland where 3D Gaussian splats sprout from your 360° videos, like wildflowers in a virtual meadow. Welcome to my latest obsession: 360° Gaussian. This software is here to change the game for us graphics nerds!

I’m talking about automatically training these splats from video or image sequences without the hassle! (IT’S LIKE MAGIC BUT WITHOUT THE HAT.)

What dropped

So what’s in the toolbox? This bad boy is free—yes, you heard right—while the paid Automasker feature boasts standalone capabilities and CLI controls. You can extract images from equirectangular videos using FFmpeg. Excited yet? Let’s go!
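
For the FFmpeg step, the invocation can be as simple as sampling frames at a fixed rate. The sketch below just builds the argument list; the paths and the fps choice are illustrative:

```python
def frame_extract_cmd(video_path, out_dir, fps=2):
    """Build an FFmpeg command that samples frames from a 360° video.

    fps=2 pulls two frames per second from the equirectangular source;
    tune it to trade dataset size against coverage of the scene.
    """
    return [
        "ffmpeg",
        "-i", video_path,              # source equirectangular video
        "-vf", f"fps={fps}",           # sample N frames per second
        f"{out_dir}/frame_%05d.png",   # numbered image sequence out
    ]

cmd = frame_extract_cmd("walkthrough_360.mp4", "frames")
# run with: subprocess.run(cmd, check=True)
```

From there the extracted frames feed straight into the splat-training step.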

Why it matters

If you’re like me, the idea of spinning 360° footage into slick 3D splats sounds like the coolest thing since the invention of the HOLODECK. We’re talking next-level creations that can make my virtual hotel room feel *like* a movie set without spending hours tweaking settings.

How to try it

Jump in! Download 360° Gaussian from this link and get cracking. You’ll use FFmpeg to extract frames from your 360° videos and then let the splats fly. Oh, and there’s a helpful video tutorial to guide you through it. Seriously, no excuses—your portfolio needs these splats!

Caveats / gotchas / bugs

BUT WAIT—there’s a catch. The workflow isn’t frictionless, and some of you might hit a wall with compute performance. Latency can sneak up if your setup isn’t beefy enough. Also, Automasker is paid, so budget accordingly unless you’re a fan of watching your bank account dwindle.

My take

Overall? I’m stoked! I’m diving deep into these splats to see how they interact with other elements in my 3D scenes. I’ll be testing its compatibility with various engines and tweaking my workflow to get the most delicious visuals possible. So, expect my updates as I embark on this splat-tastic journey!

Sources

Check out the developer’s page for 360° Gaussian here, and don’t skip the tutorial video and the Discord channel for community vibes!

New Feature Alert: Create 3D Models with SuperCool!

Quick hit

So, SuperCool just dropped a feature that lets you conjure 3D-printable models right in their interface. I mean, seriously? This is like giving my inner Holodeck a CONSTRUCTIVE workout! (Sorry in advance if I go full nerd on you.)

What dropped

Turns out, now you can transform wild ideas into actual physical objects—FAST. Hobbyists and pros alike can whip up designs without the usual fuss. I picture a wave of creative chaos, and it’s glorious.

Why it matters

This feature feels like the moment when my childhood dreams started shifting from pixelated fantasies to tactile realities. I mean, who hasn’t wanted to print out their doodles of transformer unicorns? The excitement is palpable!

How to try it

If you’re itching to dive in, check out the tutorial here. It’ll guide you through creating your first render. You might just find yourself knee-deep in 3D fun.

Caveats / gotchas / bugs

OH WAIT! Before you rush in, just know that the larger your model, the more compute power it requires. Think about latency and export formats—are you ready to scale this up and deal with those hiccups?

My take

What am I going to do about it? Time to gather my old sketches and put this feature to the test! Let’s see if I can bring some of my wild concepts into the physical realm, or at least end up with some very interesting hot messes.

Sources

Find more info about the feature in the original email newsletter.

Marveling at Marble: A Multimodal Adventure Awaits

Quick hit

Alright, hold onto your VR headsets because Marble is here! This new frontier multimodal world model just dropped from World Labs, and as a builder-nerd obsessed with Holodecks since my childhood, I’m practically vibrating with glee (cue the flashbacks of making my cardboard fort into a “simulation”!).

Marble isn’t just a pretty face. It’s a robust tool that can reshape how we interact with our 3D environments. So, yeah, this is a big deal!

What dropped

Marble launched recently, aiming to make the world of multimodal models accessible to all builders, dreamers, and rascals. Finally, a chance to create without feeling like you’re wrestling a bear in a sleep-deprived stupor.

Why it matters

For ages, I’ve yearned for tools that break down the barriers in immersive building. Marble promises to be that magical bridge—combining different data types seamlessly? YES… AND… it could actually cut down the guesswork in your builds.

How to try it

Want in on the action? You can jump right into Marble via their platform. Just head over to the World Labs blog for details. Think of it as your new playground where you get to run wild and experiment like a kid with a new toy. Seriously, what’s stopping you?

Caveats / gotchas / bugs

But hooooold your horses! There’s a caveat—you might run into latency issues when trying to process intense datasets. Nothing kills the vibe like lag, am I right? Gotta be mindful of that while you’re channeling your inner virtual architect.

My take

Honestly, I’m all-in on Marble. It’s like someone finally pulled out the missing pieces from my 3D puzzle. I’m ready to dive headfirst into exploring its capabilities. My next experiment? I’m aiming to integrate it into my current project and see if it truly lives up to the hype in terms of usability and functionality.

Sources

World Labs blog

About Quantum Leap Computing

I began programming 3D solutions in 1982 as a hobbyist. In 1999 I began writing tutorials for game tools such as Unreal Editor. In 2002 I went ‘pro’, doing 3D full-time after nearly 20 years in image processing and other computer-vision technologies (like augmented reality). I am available for contract design, architecture, consulting, programming, and troubleshooting. I stay on top of the latest technology, currently working with my own zSpace display and Oculus Rift VR HMD. I am highly rated at http://answers.unity3d.com. I work with web technologies like WebGL and also mobile (Android, iOS, Blackberry). Let me know how I can help you!

Follow me on Twitter @darendash and @thatvrguy

Visit other sites I’m involved with:
davearendash.com
oculusready.com
realmofconcepts.com
spiralconcepts.com
ideabuilderhomes.com
da-voice.com