Essay · January 2023 (updated March 2026)
When I was a kid, I wanted to build a holodeck. What we're entering now is something better: the direct-from-imagination era. You'll speak entire worlds into existence.
The Holodeck Vision
The holodeck from Star Trek combined immersive 3D simulation, natural language interfaces, and persistent worlds. Nearly everything about it is rapidly becoming reality. Not through a single device in a room, but through the convergence of generative AI, parallel computation, compositional frameworks, and the open web.
To build a holodeck, you'd need three things: a way to generate and compose ideas from natural language, a way to visualize experiences with physics-based rendering, and a way to maintain persistent worlds with data, continuity, and rules. All three are arriving simultaneously.
Generative AI as World Engine
Large language models can be conceptualized as virtual world engines. ChatGPT can dream virtual machines and text adventure games. Stable Diffusion and its successors reduce the cost of visual creation by orders of magnitude. Neural Radiance Fields (NeRFs) reconstruct navigable 3D scenes from ordinary 2D photographs, a kind of "inverse ray tracing." And text-to-3D systems are translating natural language descriptions into navigable spatial environments.
In January 2026, Google DeepMind released Project Genie—generating navigable interactive 3D environments from text prompts. It's limited today: 60-second experiences at 720p. But the trajectory from "walking simulator" to "playable game from a prompt" is far shorter than the trajectory from "no computers" to "Pong." The hard conceptual work—turning language into interactive spatial experience—is done.
The Exponential Rise in Compute
The direct-from-imagination era is powered by an exponential rise in parallel computation. A typical 2022 phone packs roughly 100 million times the computing power of the Apollo guidance computer. By 2027, the global compute available at the start of 2023 will look like a rounding error. This isn't just cloud-based capacity: edge devices and local hardware are improving at the same exponential rate, enabling on-device AI inference, real-time ray tracing, and vast simulated worlds running right in front of you.
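The "100 million times" figure is a back-of-envelope comparison, and it checks out under rough public estimates. The sketch below uses two assumed numbers not from the essay: ~85,000 instructions per second for the Apollo Guidance Computer, and ~10 TOPS for a 2022-era phone's neural accelerator.

```python
# Back-of-envelope check: phone vs. Apollo Guidance Computer.
# Both figures are rough public estimates (assumptions, not from the essay).
agc_ops_per_sec = 85_000        # Apollo Guidance Computer, ~85k instructions/s
phone_ops_per_sec = 10e12       # 2022-era phone NPU, ~10 trillion ops/s (10 TOPS)

ratio = phone_ops_per_sec / agc_ops_per_sec
print(f"Speedup: ~{ratio:.1e}x")  # on the order of 1e8, i.e. ~100 million
```

The exact ratio shifts by an order of magnitude depending on which phone chip and which workload you count, but any reasonable pairing lands in the hundred-million range.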
Compositional Frameworks
Before AI-powered creation, compositional frameworks already demonstrated the power of enabling creativity at scale. Dungeons & Dragons—the first metaverse—proved that shared imagination and storytelling could be the substrate, with technology as the delivery mechanism. Minecraft showed that a sandbox could become an entire platform for composability. Roblox demonstrated that a community of creators could build a multiverse of experiences. 3D engines like Unreal and Unity put physically-based rendering in the hands of smaller teams.
AI is now adding a new layer on top: the ability to start from what you want the experience to be and let the technology handle implementation. This is the top-down approach that works—creative vision driving implementation, not infrastructure driving design.
Beyond the Holodeck
The direct-from-imagination principle turned out to be more universal than I expected. In 2023, I was thinking about games and 3D worlds. But the same principle—imagination as input, with AI handling translation to reality—applies to software, commerce, content, and every creative act. A founder who describes the product they want and has AI agents construct, test, deploy, and scale it is speaking a world into existence. It's just a world made of APIs and databases instead of polygons.
The metaverse of multiverses beckons.
Read the full essay: The Direct from Imagination Era Has Begun on Metavert Meditations.
Related: Software's Creator Era · Semantic Programming · Games as Platforms