The UI for AI is MIA
The UI for AI is currently MIA: missing in action. The future interface of AI models is liquid: everything, everywhere, all at once.
Strip away the branding, and every AI assistant looks identical: a rounded text box greeting you with “How can I help you?” It’s the same interface that worked for more than two decades for Google’s simple keyword-based search.
But we’ve outgrown this paradigm. As our prompts become more complex and our expectations higher, typing lengthy instructions every single time feels increasingly clunky. We need something better. Luckily, there are a few contenders for a new wave of AI interfaces.
🔗 Node-based interfaces like Flora and n8n are having a moment thanks to agentic workflows. They’re powerful—think Blender or Ableton Live—but they’re also complex. Too much friction for everyday use. As Luke W points out, they won’t achieve mass adoption.
🎨 Infinite canvases like Napkin AI and Figjam offer freedom to type, draw, and create anywhere. But that blank canvas can be paralyzing. Where do you even begin? Great for exploration, limiting for efficiency.
💧 The future interface won’t be fixed, it will be liquid. Like Apple’s latest iOS, the UI flows and takes the shape of its context.
Think about how you communicate today. With friends, you seamlessly switch between text, voice notes, photos, polls, and reactions. The medium adapts to the message and the moment. That’s liquid communication.
AI assistants will work the same way. The context will determine the optimal input method: whether that’s voice, sketching, uploading files, or typing. No more forcing every interaction through a text box.
Depending on your needs, AI will respond with:
- Interactive apps (like Claude Artifacts)
- Complete websites (like Manus and Lovable)
- Audio content (like NotebookLM podcasts)
- Video (like Veo)
- Learning exercises (like Khanmigo)
- and many more shapes.
The magic will be in the AI’s ability to choose the right “shape” for each interaction, making the experience feel seamless. Infinite memory and ever-larger context windows will make the conversation feel natural. We’re moving from a world of rigid chat boxes to adaptive, context-aware interfaces that flow with our needs.