Hi! My name is Hongkoo Yeo, and I’m currently pursuing a Master of Architecture at Georgia Tech, after earning my Bachelor of Architecture from Korea University.
My work investigates the intersection of architecture, sound, and computation — how spatial and sonic forms can share a common grammar of composition and perception. I explore how architectural geometry, auditory experience, and algorithmic systems can be understood as parts of a single multimodal continuum.
Recent projects include:
AI-driven studies of multimodal perception, where machine learning models are trained to interpret correspondences between visual form and sound identity; and
Shape Grammars of Hanoks, which encode traditional Korean architecture as a generative system for studying cultural and perceptual continuity.
My practice draws on both architectural computation and generative music, using tools such as Grasshopper, Max/MSP, and custom algorithms to build environments in which geometry and sound inform one another. Through these projects, I seek to develop computational frameworks that unite form, sound, and cognition — what I think of as the multimodal perception of architecture.
I’m especially interested in research that situates design as a transmodal act — moving across dimensions, media, and sensory experience — and in contributing to broader conversations about sonic architectonics, algorithmic aesthetics, and total environments.
Feel free to reach out to me!