When I arrived at MIT to attend the Supply Chain Management Master’s Program, I wanted to learn how supply chains are built from the ground up, and better understand the logic beneath the decisions I had been executing. At Apple, I tracked inventory for product launches. I knew the rhythms of a well-tuned supply chain: safety stock, sell-through rates, new-product launch timelines. I was good at it. But being good at executing within a system is different from understanding how that system should be designed in the first place. I didn’t expect that the sharpest lessons I learned would come from domains where operational discipline and frontier technology have no choice but to figure each other out.
Every Node Has Its Own Agenda
The first of these lessons came not from my supply chain coursework, but from teaching AI agents how to fail gracefully at the AI Venture Studio at the MIT Media Lab. I’ve been building multi-agent systems with specialized AI models that coordinate to accomplish tasks no single model handles well. The design problem isn’t purely technical. Each agent operates with partial information and its own failure modes; if one agent fails, it can cause others to fail in sequence, so resilience must be built into the architecture before the first line of code is written.
I recognized the same structure from my supply chain work. A global supplier network is a multi-agent system in everything but name: each node operates with its own objectives, constraints, and blind spots, and alignment between those nodes isn’t a default state you configure once but something that must be architected from the start. The difference is that in supply chain, a misaligned node delays a shipment — a visible, traceable failure. In a multi-agent system, it can corrupt the entire output silently, with no early warning signal of the kind a delayed shipment provides. Working on multi-agent systems made explicit what my supply chain work had let me take for granted: alignment is never a given; it has to be designed in.
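The cascade-versus-graceful-failure distinction can be made concrete with a toy sketch. Everything here — the `Agent` class, the pipeline, the failing "enrich" step — is invented for illustration, not taken from the Venture Studio's actual systems. The idea is simply that each agent either succeeds or degrades to a local fallback, so one failure doesn't propagate downstream:

```python
# Toy sketch of "failing gracefully" in a multi-agent pipeline.
# All names here are illustrative, not from any real framework.

class Agent:
    def __init__(self, name, task, fallback=None):
        self.name = name
        self.task = task          # callable: input -> output, may raise
        self.fallback = fallback  # callable used when task fails

    def run(self, data):
        try:
            return self.task(data), None
        except Exception as exc:
            if self.fallback is not None:
                # Degrade locally instead of propagating the failure.
                return self.fallback(data), exc
            raise  # no fallback: the failure cascades downstream

def run_pipeline(agents, data):
    errors = []
    for agent in agents:
        data, err = agent.run(data)
        if err is not None:
            errors.append((agent.name, err))
    return data, errors

# The middle agent always fails, but its fallback passes the input
# through unchanged, so the pipeline still produces a usable output
# and records which node degraded.
pipeline = [
    Agent("extract", lambda d: d + ["extracted"]),
    Agent("enrich", lambda d: 1 / 0, fallback=lambda d: d),
    Agent("summarize", lambda d: d + ["summarized"]),
]
result, errors = run_pipeline(pipeline, [])
```

The design choice mirrors the supply chain analogue: the fallback is the architectural equivalent of safety stock — a deliberately designed-in buffer, decided before the first failure, not after it.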
Growth Is Not a System
That instinct to build structure before scaling got its next test at the Martin Trust Center, where I’ve been using AI tools to compress the iteration cycle for early ventures: faster prototyping, faster assumption testing, faster validation with less capital burned. The tooling has gotten good enough that a small team can now do in days what used to require months of customer discovery and a real-world pilot that burns through funding.
But the most useful thing I brought into that room wasn’t an AI tool. It was a supply chain instinct about the relationship between velocity and structure. Startups are taught to prioritize growth, and AI makes it easier than ever to move fast, but moving fast without well-structured operations in place leads to problems that compound quietly. The per-unit economics that look fine at 50 customers break at 5,000. Cash cycles that were invisible at low volume become fatal at scale. These aren’t edge cases; they’re the default trajectory for ventures that treat operational structure as something you bolt on after product-market fit rather than something you design for from the first transaction.
What AI tools enable, when used well, is the ability to stress-test those assumptions — about unit economics, cash cycles, and operational structure — earlier and more cheaply than before. That’s the iteration that matters: not just how fast you can build, but how early you can find out where the system breaks.
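A stress test of that kind can be tiny. The sketch below uses entirely invented numbers (price, base cost, a support cost that grows with scale) to show the shape of the failure mode described above: per-unit margin that looks healthy at 50 customers flips negative by 5,000.

```python
# Toy unit-economics stress test. All numbers are invented for
# illustration, not from any real venture.

def margin_per_customer(n_customers,
                        price=100.0,
                        base_cost=40.0,
                        support_cost_per_1k=15.0):
    # Assume support/coordination cost per customer grows with scale
    # (headcount, infrastructure tiers) while price stays fixed.
    unit_cost = base_cost + support_cost_per_1k * (n_customers / 1000)
    return price - unit_cost

for n in (50, 500, 5000):
    print(f"{n:>5} customers: margin {margin_per_customer(n):+.2f}")
```

The point isn't the arithmetic; it's that the break point is discoverable in minutes, before the venture has 5,000 customers and no room to redesign.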
That question, about where structure enables and where it suffocates, took on a different dimension when I joined the team organizing the MIT Global Startup Workshop (GSW) 2026 in Daegu, South Korea. Now in its 28th year, MIT GSW is a student-run conference that convenes in a different startup ecosystem each year, bringing together founders, investors, academics, and government stakeholders for two days of programming built around one question: How does entrepreneurship take root in a specific place and culture? Hosting it alongside Kyungpook National University means working at the intersection of two innovation cultures that approach venture building differently. It’s a reminder that the operational infrastructure a startup ecosystem takes for granted — logistics networks, financing mechanisms, regulatory clarity — is itself a supply chain problem that must be solved before venture building can scale.
The Problem That Doesn’t Fit Anywhere
The most disorienting project I’ve worked on this year starts with a question most engineers would dismiss in the first five minutes: Why are we building elaborate cooling systems on land when placing a data center underwater offers a vastly more efficient thermal sink?
Working with Prof. Luis Perez-Breva in iTeams, an MIT program that helps deep tech projects find their path from research to real-world impact, I’ve been exploring the feasibility of underwater data centers. The underlying problem is concrete: AI data centers consume enormous amounts of power, and a significant fraction of that power becomes heat that must be removed. Water is a vastly more efficient thermal sink than air, so submerged facilities could cut cooling energy dramatically. Microsoft and Google have run experiments, but neither has committed to deploying such a solution. The concept hasn’t scaled.
Not because the physics are wrong, but because the problem doesn’t fit anywhere institutionally. It’s too capital-intensive and cross-disciplinary for academic research cycles, and too speculative for corporate quarterly timelines. It lives in the gap between theoretically feasible and operationally real, which is exactly where iTeams operates.
Our job isn’t to solve the full engineering problem. It’s to identify existing technologies — likely from adjacent industries — that could make a first testable version viable, and to define precisely what scaling would require rather than defaulting to the assumption that it will never work. Structurally, it’s the same question supply chain practitioners face when standing up a new network from scratch: How do you design a system that allows an idea to move from concept to deployment, without building in the failure modes that will eventually kill it? That question reframed something I thought I already understood: supply chain expertise doesn’t just live in the optimization; it lives in the gap between what’s feasible and what’s operational.
What Connects All of This
Three different projects, and one lesson that kept arriving uninvited: The hardest part of any system isn’t building it; it’s designing it to hold when your starting assumptions turn out to be wrong. In multi-agent AI, that means anticipating failure modes before they cascade. In early ventures, it means stress-testing unit economics before scale exposes their shortcomings. In deep-sea infrastructure, it means defining what “testable” even looks like before committing capital to a concept that has no institutional home.
The through-line across all three wasn’t a particular background. It was a willingness to be seriously wrong about something, early enough to rethink a design approach before committing to implementation. The MIT SCM program gave me the tools to work rigorously on problems that resist easy categorization. The broader MIT ecosystem gave me the room to test those tools on problems that haven’t found their institutional home yet.
Rui Yang Teo is a graduate candidate in MIT’s Master of Applied Science in Supply Chain Management (MASc-SCM). He spent several years at Apple Inc. managing channel operations and inventory strategy across the APAC region, where he developed a precise appreciation for what well-designed systems can do and what they can’t. He holds an MS in Analytics from Georgia Tech. He is interested in the places where operational discipline and frontier technology have no choice but to figure each other out.