Building Anaspace: 18 Hours, 48 Commits, and a Character Grid That Shouldn't Work

The development story of a cultural exploration engine built with Claude Code for a hackathon — told through the specs, missteps, and moments of clarity that shaped it.

18 Hours · 48 Commits · 9,144 Lines of Code · 12 Services
February 13 · Pre-Code

The Premise Nobody Asked For

Anaspace started with a question that sounds deceptively simple: what if you could Shazam a song and immediately see the web of cultural influence radiating out from that moment — the artist, the city you're standing in, and the era the music belongs to?

Not a music app. Not a map app. Not a Wikipedia reader. Something that sits at the intersection of all three, anchored by what Ammon Haggerty called "the persistent triad": Subject, Place, and Time. Three dimensions, always present, always co-equal. Change one and the other two have to respond intelligently.

Hear Sly Stone in Oakland in 2026, and the system shows you 1971. Shift the place to Berlin, and it asks: who is the Sly Stone of Berlin? Shift the year to 1930 and it hunts for whoever carried that same energy in that era.

This is the kind of idea that either dies on a whiteboard or gets built in a fever. Ammon chose the fever.

February 13 · The Concept Bible

Six Principles Before a Single Line of Swift

Before any code was written, a core concepts document laid the philosophical foundation. This wasn't a typical spec — it was a manifesto about what the app believed.

  1. The LLM is a narrator, not a researcher. Feed it structured facts, ask it to narrate. Never ask the model to "know" things.
  2. The graph is the product. AI generates flavour text; the knowledge graph generates insight.
  3. Three dimensions, always. Never collapse below three.
  4. Cheapest layer that works. Static label before template before AI.
  5. Infer, don't ask. Personalization emerges from behaviour, not questionnaires.
  6. Bidirectional exploration. Every entity is both a destination and a departure point.

Key Insight

Principle #4 reveals an almost adversarial relationship with AI cost. The "narrative data waterfall" specified checking Apple Music editorial notes first, then Wikipedia extracts, then computed templates, and only then calling an LLM for a single sentence. Most interactions were designed to need zero AI. The LLM was the expensive last resort, not the default.

The concept doc also staked out a hard sovereignty position: all user data stays on-device. Observations, preferences, exploration history, personal graph — nothing touches a server. Cultural exploration could be private.

February 13 · Morning

The Rendering Gamble

Here's where the first truly contrarian decision was made.

Rather than building Anaspace as a conventional SwiftUI app with cards and lists and navigation stacks, the rendering system spec called for something closer to a 1980s terminal emulator: a 33-column monospaced character grid using JetBrains Mono, rendered entirely through Core Animation's CATextLayer. The entire UI — every piece of text, every animation, every visual effect — would be composed of individual character cells on a fixed grid.

This is the kind of decision that looks either brilliant or insane, with no middle ground.

Grid Specification
Font: JetBrains Mono, 15.52pt
Letter spacing: 11%
Line height: 22.3pt
Grid: 33 cols × 32 rows
Colours: exactly 5
Gradients / opacity: none
Layers: Structure · Content · Transition
Layer instances: 96 CATextLayer
Screen state size: ~3 KB

The performance math was done upfront: 32 rows × 3 layers = 96 CATextLayer instances. During the heaviest animation moment, maybe 4–6 layers would need string mutations per frame. Each mutation under 1ms. The 60fps budget gives 16.6ms per frame. Massive headroom.
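To make that budget concrete, here is a minimal Swift sketch of a three-bank CATextLayer grid with dirty-row updates. The layout values come from the spec above; the type names and wiring are illustrative, not the project's actual code.

```swift
import UIKit

// A minimal sketch of the layered character grid described above.
// Type names and wiring are illustrative, not Anaspace's actual implementation.
final class CharacterGrid: UIView {
    static let columns = 33
    static let rows = 32
    private let lineHeight: CGFloat = 22.3

    // Three z-ordered banks of row layers: structure, content, transition.
    private var banks: [[CATextLayer]] = []

    override init(frame: CGRect) {
        super.init(frame: frame)
        let mono = UIFont(name: "JetBrainsMono-Regular", size: 15.52)
            ?? UIFont.monospacedSystemFont(ofSize: 15.52, weight: .regular)
        for bank in 0..<3 {                                  // 3 banks × 32 rows = 96 CATextLayers
            var rowLayers: [CATextLayer] = []
            for row in 0..<Self.rows {
                let text = CATextLayer()
                text.font = mono
                text.fontSize = 15.52
                text.contentsScale = UIScreen.main.scale
                text.frame = CGRect(x: 0, y: CGFloat(row) * lineHeight,
                                    width: frame.width, height: lineHeight)
                text.zPosition = CGFloat(bank)
                layer.addSublayer(text)
                rowLayers.append(text)
            }
            banks.append(rowLayers)
        }
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    /// Dirty-row rendering: only rows whose strings actually changed get touched.
    func setRow(_ row: Int, bank: Int, to string: String) {
        banks[bank][row].string = string                     // a sub-millisecond string swap
    }
}
```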

This was the moment the project committed to its identity. Everything downstream was now constrained to characters on a grid. No going back.

February 13 · 12:04 PM — Commit #1

First Pixels

The first commit landed: "Add character grid prototype with cascade animation."

A CharacterGrid UIView wrapping the 3×N CATextLayer grid with dirty-row rendering. A CascadeAnimation system doing 15ms-stagger row sweeps. A structure layer creating graph-paper texture in warm tan. Content layer showing ● READY TO OBSERVE centred — red dot, dark brown text. Background colour: dusty rose #C4AFA0.

The prototype validated the core bet. Characters on a grid, rendered through Core Animation, running at 60fps. The cascade animation — random glyphs sweeping across rows, then clearing top-to-bottom — proved that the transition layer concept worked. You could hide a content swap behind a wave of visual noise.
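A rough sketch of that row sweep, building on the CharacterGrid sketch above; the glyph pool and the clear timing here are assumptions, not the commit's exact values.

```swift
import UIKit

// A sketch of a 15ms-stagger row sweep on the transition bank.
struct CascadeAnimation {
    let noiseGlyphs = Array("░▒▓█▚▞")

    /// Sweeps random glyphs across rows one at a time, then clears them,
    /// hiding the content swap happening underneath.
    func run(on grid: CharacterGrid, rowCount: Int = 32, columns: Int = 33) {
        for row in 0..<rowCount {
            let delay = Double(row) * 0.015                      // 15 ms stagger per row
            DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
                let noise = String((0..<columns).map { _ in noiseGlyphs.randomElement()! })
                grid.setRow(row, bank: 2, to: noise)             // transition layer
            }
            // Clear the same row once the wave has passed.
            DispatchQueue.main.asyncAfter(deadline: .now() + delay + 0.25) {
                grid.setRow(row, bank: 2, to: "")
            }
        }
    }
}
```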

But the session summary also captured the honest state of things: cascade animation wired to tap but not visually verified. Bottom nav icons using placeholder Unicode. Everything uncommitted and fragile.

February 13 · Afternoon — Commit #2

The Wave System

An hour after the first commit, a far more ambitious animation spec arrived: the observe animation prototype. This was the hero interaction — what happens when you press the button and the app listens to the world.

The spec described a dual-wave interference system that reads like a physics simulation translated into typography:

[Diagram: audio-reactive inbound waves (35 units/sec) sweep in from the grid edges, outbound sonar pulses (17.5 units/sec) radiate from the observe button, and the collision zone renders interference glyphs at chaos 0.85]

Outbound waves ("sonar"): concentric rings of glyphs emanating from the observe button. 9-unit band width, moving at ~17.5 units per second. Leading edges sparse and light, cores dense and dark, trailing edges fading.

Inbound waves (audio-reactive): originating from grid edges, moving inward toward the button, driven by audio amplitude. Quiet room = faint wisps at the edges. Loud music = dense waves pushing deep toward centre. Moving at 2× the speed of outbound.

Interference: where the two wave systems overlap, special glyphs appear from a distinct pool — geometric, angular characters that look like neither wave. Chaos cranks to 0.85. Designed to feel like a spark.

Glyph density was organised into five tiers, from barely-visible dots (· ˙ ' ,) through to full block characters (█ ▇ ▆). The effect was organic, almost biological — dense masses of characters pulsing outward with ragged, breathing edges.
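A hedged sketch of how a per-cell glyph decision like this could look. The speeds, band width, and five density tiers come from the spec; the sampling logic, tier strings, and thresholds are assumptions.

```swift
import Foundation

// Illustrative per-cell glyph selection for the dual-wave system.
enum WaveField {
    static let outboundTiers = ["·˙',", "░░▒", "▒▓", "▓█", "█▇▆"]   // sparse → dense
    static let interferencePool = Array("╬╳◈")

    static func glyph(at cell: (col: Int, row: Int),
                      center: (col: Int, row: Int),
                      elapsed: Double,          // seconds since the button was pressed
                      amplitude: Double) -> Character? {
        // Outbound sonar ring: 9 units wide, expanding at ~17.5 units/sec.
        let dx = Double(cell.col - center.col), dy = Double(cell.row - center.row)
        let distFromCenter = (dx * dx + dy * dy).squareRoot()
        let ringRadius = elapsed * 17.5
        let inOutbound = abs(distFromCenter - ringRadius) < 4.5

        // Inbound wave: pushes in from the grid edges at 2× speed, scaled by amplitude.
        let distFromEdge = Double(min(cell.col, 32 - cell.col, cell.row, 31 - cell.row))
        let inInbound = distFromEdge < elapsed * 35.0 * amplitude

        switch (inOutbound, inInbound) {
        case (true, true):
            return interferencePool.randomElement()          // collision → chaos glyphs
        case (true, false):
            // Denser tiers toward the core of the band, lighter at the ragged edges.
            let tier = min(4, max(0, Int(4.5 - abs(distFromCenter - ringRadius))))
            return outboundTiers[tier].randomElement()
        case (false, true):
            return amplitude > 0.5 ? "▓" : "░"
        default:
            return nil                                       // cell stays empty
        }
    }
}
```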

February 14 · 11:16 AM

The Service Layer Reality Check

A day passed. The rendering was working. The animations were conceptually sound. Now came the hard part: making the thing actually do something.

Two documents landed ten minutes apart. The first was the service layer design doc, immediately followed by the 13-task implementation plan. This was the point where the project shifted from "impressive tech demo" to "actual application."

Centralised ServiceManager: One @Observable object owns all services, coordinates activation. Not microservices, not dependency injection frameworks — one object with clear ownership of the entire service tree. Pragmatic for a hackathon, but also a bet that complexity management through simplicity beats complexity management through abstraction.

Parallel audio pipeline: The key technical insight. A single AVAudioEngine captures audio and fans the buffer to three consumers simultaneously: ShazamKit for music recognition, Apple Sound Analysis for scene classification, and Speech Recognition for voice transcription. All three run from the first moment the button is pressed.
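A simplified sketch of that fan-out. Request registration, delegates, and error handling are omitted, and the wiring is illustrative rather than the project's actual service code.

```swift
import AVFoundation
import ShazamKit
import SoundAnalysis
import Speech

// One AVAudioEngine tap feeding three consumers simultaneously.
final class AudioPipeline {
    private let engine = AVAudioEngine()
    private let shazamSession = SHSession()
    private let speechRequest = SFSpeechAudioBufferRecognitionRequest()
    private var sceneAnalyzer: SNAudioStreamAnalyzer?

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        sceneAnalyzer = SNAudioStreamAnalyzer(format: format)

        // A single tap on the input node; every buffer is handed to all three consumers.
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, time in
            guard let self else { return }
            self.shazamSession.matchStreamingBuffer(buffer, at: time)            // music recognition
            self.speechRequest.append(buffer)                                    // voice transcription
            self.sceneAnalyzer?.analyze(buffer, atAudioFramePosition: time.sampleTime) // scene classification
        }
        try engine.start()
    }
}
```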

Architecture Decision

Claude integration was explicitly stubbed. The design doc said it plainly: "ClaudeService — Protocol + stub that returns mock culture map data. Real implementation in next phase." The intelligence layer was deliberately deferred while the sensory apparatus was built first. Ears before brain.
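The pattern is simple enough to sketch. The CultureMap shape below is a placeholder invented for illustration, not the app's real model.

```swift
// Protocol + stub: the rest of the app codes against the protocol,
// while the stub returns mock culture map data until the real integration lands.
struct CultureMap {
    let subject: String
    let place: String
    let year: Int
    let entities: [String]
}

protocol ClaudeService {
    func cultureMap(subject: String, place: String, year: Int) async throws -> CultureMap
}

struct StubClaudeService: ClaudeService {
    func cultureMap(subject: String, place: String, year: Int) async throws -> CultureMap {
        CultureMap(subject: subject, place: place, year: year,
                   entities: ["Mock related artist", "Mock venue", "Mock movement"])
    }
}
```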

February 14–15 · The Deep Specs

The Interaction Model Takes Shape

Two more specs were written around this time — the observe interaction spec and the observe logic/decision spec — representing the deepest thinking in the entire project.

The interaction spec defined the gesture model with a 300ms tap/hold threshold. Under the threshold: "listen to my world" — system-controlled observation. Over the threshold: "listen to me" — walkie-talkie mode where the user provides voice input.
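A sketch of that split, using the spec's original 300ms value (later raised to 500ms, as the retrospective notes). The type names are illustrative, and real touch handling would hang off a gesture recognizer.

```swift
import Foundation

// Timer-based classification so walkie-talkie mode can begin while the finger is still down.
enum ObserveMode { case listenToMyWorld, listenToMe }

final class ObserveGestureClassifier {
    private var holdTimer: Timer?
    private(set) var mode: ObserveMode?
    let holdThreshold: TimeInterval = 0.3        // the spec's original threshold

    func touchDown() {
        mode = nil
        holdTimer = Timer.scheduledTimer(withTimeInterval: holdThreshold, repeats: false) { [weak self] _ in
            self?.mode = .listenToMe             // threshold crossed → "listen to me"
        }
    }

    func touchUp() -> ObserveMode {
        holdTimer?.invalidate()
        let resolved = mode ?? .listenToMyWorld  // released early → "listen to my world"
        mode = resolved
        return resolved
    }
}
```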

It designed a haptic language to communicate system state without visual feedback: 1 Hz pulse for baseline listening, 2 Hz for music detected, accelerating pulses when resolution is imminent, three quick taps for a Shazam match, a long soft buzz for timeout. The app talks to you through vibration.

The spec confronted "the quiet location problem" head-on: what happens when you tap Observe in a silent room in a culturally unremarkable location? The answer was a six-level fallback cascade — exact coordinates, neighbourhood associations, nearest culturally significant city, regional context, "today in cultural history," and finally user affinity. The design principle: no observation ever returns nothing.
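The cascade is easy to picture as an ordered walk through fallback levels. This sketch mirrors the spec's six levels; the resolution logic and the final fallback string are invented for illustration.

```swift
// The six-level fallback order from the spec, walked top to bottom.
enum LocationFallback: Int, CaseIterable {
    case exactCoordinates
    case neighbourhoodAssociations
    case nearestCulturallySignificantCity
    case regionalContext
    case todayInCulturalHistory
    case userAffinity
}

/// Walks the cascade until some level produces a non-empty place context,
/// so no observation ever returns nothing.
func resolvePlace(using resolvers: [LocationFallback: () -> String?]) -> String {
    for level in LocationFallback.allCases {
        if let context = resolvers[level]?(), !context.isEmpty {
            return context
        }
    }
    return "Somewhere on Earth"   // illustrative last resort; unreachable if userAffinity always answers
}
```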

The logic and decision spec went deeper, mapping every possible signal combination to an assembly path. The precedence rules were explicit: voice commands always win. If the user says "Observe Kraftwerk" while Sly Stone is playing, the subject is Kraftwerk.
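The precedence rule itself collapses to a one-liner; a hypothetical sketch:

```swift
// Voice commands always win; Shazam matches beat inferred context.
func resolveSubject(voiceCommand: String?, shazamMatch: String?, inferredSubject: String?) -> String? {
    // "Observe Kraftwerk" beats whatever is playing in the room.
    voiceCommand ?? shazamMatch ?? inferredSubject
}
```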

February 15–16 · The Sprint

Claude Goes Live — 15 Commits in One Day

February 15th saw one commit fixing the listening flow. Then February 16th exploded.

Fifteen commits in one day. The pace is visible in the commit messages — they alternate between feature additions and bug fixes, the characteristic rhythm of building fast and discovering problems in real time.

February 17–18 · Polish Then Complexity

The Details That Only Matter When Someone Might Use It

February 17th brought visual consistency work: bracket glyphs standardised across all pages, status bar shown, idle pulse animation on the landing page.

February 18th was a different beast — six commits tackling increasingly subtle problems. A reusable BracketButton component. Onboarding redesigned to four minimal screens with wipe transitions. Triad continuity fixes — the triad was breaking when subjects changed. A footer state machine. Context change cancellation. And the critical detail: "persist music during loads" — the audio player shouldn't stop when you navigate.

February 19 · The Reckoning

The Token Problem

Two commits reveal a shift in concerns. The first added SSE streaming and split the JSON schema for progressive Claude rendering. The second tuned the location-change rules and — critically — optimised token usage.

This is the point where the idealistic principle from the core concepts doc ("cheapest layer that works") met reality. Claude was now live, generating culture maps, and the token costs were noticeable.

Constraint → Better Design

The decision to stream responses progressively changed the user experience: instead of waiting for a complete culture map and rendering it all at once, the character grid could start populating entities as they arrived. The constraint (token cost) produced a better interaction pattern (progressive revelation).
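Here is a sketch of what progressive rendering over SSE can look like, assuming a server that emits one JSON entity per `data:` line. The endpoint, payload shape, and Entity type are assumptions, not Anaspace's actual schema.

```swift
import Foundation

// Decode entities as they arrive and hand each one to the renderer immediately.
struct Entity: Decodable { let name: String; let kind: String }

func streamCultureMap(from url: URL, render: (Entity) -> Void) async throws {
    var request = URLRequest(url: url)
    request.setValue("text/event-stream", forHTTPHeaderField: "Accept")

    let (bytes, _) = try await URLSession.shared.bytes(for: request)
    for try await line in bytes.lines {
        guard line.hasPrefix("data:") else { continue }
        let payload = line.dropFirst(5).trimmingCharacters(in: .whitespaces)
        if let data = payload.data(using: .utf8),
           let entity = try? JSONDecoder().decode(Entity.self, from: data) {
            render(entity)   // the grid can start populating before the map is complete
        }
    }
}
```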

February 20 · Final Day

The Bugs That Only Exist in the Real World

The last day focused on edge-case quality. The most revealing fix: "music volume dropping to ~25% after Observe." The audio session measurement mode — needed for Shazam capture — was reducing the hardware gain, so any music playing in the background would get quiet when you tapped Observe and stay quiet afterward.

This is the kind of bug that only surfaces when you use the actual app in the real world, holding your phone, with music playing from a speaker nearby. No spec anticipates it. You just have to build the thing and live with it.
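For illustration, a hedged sketch of the kind of session juggling involved: measurement mode (useful for Shazam capture) disables system-supplied gain processing, and restoring a playback-friendly mode afterward is one common mitigation. This illustrates the trap, not necessarily the project's exact fix.

```swift
import AVFoundation

// Capture: measurement mode gives Shazam a cleaner signal, but dulls playback gain.
func beginObservationCapture() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement, options: [.mixWithOthers])
    try session.setActive(true)
}

// After capture: drop back to a playback-friendly configuration so
// background music returns to full volume instead of staying at ~25%.
func endObservationCapture() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try session.setActive(true)
}
```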

The observe button UX was refined one more time, music was set to fade on context changes rather than cutting abruptly, a privacy policy was added for App Store compliance, and the README got its final revision.

· · · · · · · · ·
Retrospective

What the Specs Reveal About the Process

The .claude directory is a fossil record of how one developer and one AI coding partner think through a problem.

Concepts first, code second. The core concepts doc preceded any implementation. It established vocabulary — "the persistent triad," "analog finding," "narrative data waterfall" — that became the language of every subsequent spec and commit message. The code didn't discover the architecture; the architecture was articulated, then implemented.

Specs as conversation artefacts. These documents read like one side of an intense conversation. They answer questions that were clearly being asked: "How exactly does the grid render?" "What happens when both waves occupy the same cell?" "What if the user is in a culturally boring location?"

The rendering system was the riskiest bet and it was validated first. Before any services, any data, any AI integration — prove that characters on a grid look good and perform well. If that bet failed, nothing else mattered.

Services were built ears-first. The audio pipeline and its consumers were fully specified and implemented before Claude integration went live. The app could listen to the world before it could think about what it heard.

The last 30% took 60% of the time. The first two days built the rendering system, animation system, and service architecture. The next three days were spent on the hundred small things that make an app feel real.

Retrospective

The Blind Turns

Tap/Hold Threshold

Started at 300ms in the interaction spec, moved to 500ms in the logic spec. Someone was testing with real fingers on real glass and discovering that 300ms was too hair-trigger — natural taps were being misclassified as holds.

Shazam Capture Reliability

Needed explicit fixing on February 16th. The parallel audio pipeline is elegant in spec form, but sharing a single AVAudioEngine buffer across three consumers produced timing issues in practice.

Triad Continuity

Broke when subjects changed — fixed February 18th. "When any dimension changes, reassess the other two" is easy to describe and hard to implement without state management bugs.

Footer State Machine

February 18th. The bottom navigation went through significant rework. State machines appear when simple conditional logic has failed.

Audio Session Volume Bug

February 20th. A classic iOS audio trap — the measurement mode needed for Shazam capture conflicts with the playback configuration the music player needs. No spec-writing anticipates this.

What It All Adds Up To

Anaspace is a peculiar and ambitious thing: a cultural exploration engine rendered as a terminal emulator, driven by echolocation, built in 18 hours by a human and an AI working from specs they wrote together.

The .claude directory captures something rare — the complete intellectual trajectory from "here's what I believe this should be" through "here's exactly how it should work" to "here's what actually happened when we built it."

The gaps between the specs and the commit history are where the real development story lives: in the threshold that had to change, the reliability fix that wasn't in any spec, the volume bug that only exists because the real world has speakers and iPhones and rooms with acoustics.

The character grid rendering decision still looks either brilliant or insane.
But 9,144 lines of code and a working app later, it at least looks committed.