# I Built a Maze Runner Simulation Where Teenage Sam Altman Survives the Glade

*December 25, 2025*
## The Concept in One Sentence

What happens when you drop a teenage version of Sam Altman into James Dashner's The Maze Runner universe? Apparently, you get a surprisingly addictive little personality-driven simulation that combines:

- Big Five (OCEAN) personality modeling
- Probabilistic decision making
- Token-efficient LLM narrative generation
- Health + relationship tracking
- Beautiful console output + matplotlib health graphs

And yes — it is as weird and fun as it sounds.

A 16-year-old Sam Altman wakes up in the Box. His survival odds depend (mostly) on how curious, agreeable, and emotionally stable you decide he is.

## Why This Even Exists

- Play with OCEAN personality traits as actual gameplay mechanics
- Use very light LLM calls for flavor text without burning money
- Create something that feels like interactive fan-fiction but runs in ~200 lines
- Have an excuse to think about teenage Sam Altman solving mazes (don't ask)

## Core Architecture (surprisingly clean for such a silly idea)

Everything is in one gloriously self-contained file right now (MVP life):
```python
from dataclasses import dataclass
from typing import Dict
import random


@dataclass
class Character:
    name: str
    age: int
    ocean: Dict[str, float]                  # Big Five traits, each 0.0–1.0
    health: float = 100.0
    relationships: Dict[str, float] = None   # filled in once the Gladers show up

    def make_decision(self, event_type: str) -> str:
        # Higher Openness → more likely to walk toward the scary thing.
        if event_type == "danger":
            return "explore" if random.random() < self.ocean["O"] else "hide"
        # ... etc
```
The simulation loop is brutally simple:

```python
while sim.run_day():
    pass
```

And then it prints a nice table + health plot at the end.

## Personality Really Matters (more than you'd expect)

Here's what happens when you run the exact same events with slightly different OCEAN profiles: the high-Openness version is dramatically more entertaining (and painful) to watch.

## LLM Usage — Extremely Paranoid Edition

I wanted narrative flavor without spending $12 on one funny run, so:

- Max 85 tokens per generation
- Temperature 0.7 (chaotic but not insane)
- Perfect fallback text that still makes sense
- Only one call per day

```python
def generate_narrative(...):
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            max_tokens=85,
            temperature=0.7,
            ...
        )
    except:
        return "Fallback: Sam did a thing. It was probably fine."
```

Total cost for 50 full runs ≈ $0.04–0.07 (December 2025 pricing).
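Putting the pieces together, here's roughly what one simulated day could look like. This is a sketch, not the repo's actual `run_day()`: the event text, the outcome probability, the health deltas, and the `generate_narrative` call signature are all assumptions for illustration.

```python
import random

def run_day(sam: Character, day: int) -> bool:
    """One day in the Glade: one event, one decision, one capped LLM call."""
    event = random.choice([
        "A Griever is heard beyond the walls.",
        "Sam is offered a chance to run the Maze with the Runners.",
    ])
    decision = sam.make_decision("danger")                # personality-weighted choice
    outcome = "positive" if random.random() < 0.6 else "negative"

    sam.health += 5 if outcome == "positive" else -15     # update state first...
    story = generate_narrative(event, decision, outcome)  # ...then the one LLM call (assumed signature)

    print(f"Day {day} | {event}")
    print(f"→ Decision: {decision} → Outcome: {outcome}")
    print(story)
    return sam.health > 0                                 # False ends the while-loop
```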
## Console Output Porn (because we all love pretty terminals)

```
Day 4 | Sam is offered a chance to run the Maze with the Runners.
→ Decision: explore
→ Outcome: positive
Sam sprinted into the twisting corridors, eyes wide with that terrifying mix of terror and fascination only a true puzzle addict could feel.
Health: 100 | Relationships: Newt:0.68 Thomas:0.62 Chuck:0.58
```

Followed by a clean summary table and this little guy at the end:

(matplotlib line chart of health going up and down like teenage mood swings)
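If you want to reproduce a chart like that, all it takes is the per-day health history. Here's a minimal sketch, assuming the simulation collects health values into a list; the function name and styling are mine, not the repo's:

```python
import matplotlib.pyplot as plt

def plot_health(health_history: list) -> None:
    # One run: day number on x, Sam's health on y.
    days = range(1, len(health_history) + 1)
    plt.plot(days, health_history, marker="o")
    plt.xlabel("Day in the Glade")
    plt.ylabel("Health")
    plt.title("Teenage Sam Altman vs. the Maze")
    plt.ylim(0, 105)
    plt.show()

plot_health([100, 100, 85, 90, 70, 55, 60])  # example trajectory, not real output
```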
## How to Run It Right Now

```bash
git clone https://github.com/ilyarah/Maze_Runner_Sam_Altman_Sim_MVP.git
cd Maze_Runner_Sam_Altman_Sim_MVP
pip install openai matplotlib
export OPENAI_API_KEY="sk-..."
python simulation/main.py
```

Takes ~20–40 seconds per run (mostly waiting for OpenAI).
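If you want to batch runs the way the cost estimate above assumes (about 50), a small driver loop is enough. `Simulation` and the `sim.sam.health` attribute are my guesses at the repo's internals, not a documented API:

```python
# Hypothetical batch driver; the repo's real entry point is simulation/main.py.
results = []
for _ in range(50):
    sim = Simulation()              # assumed constructor
    while sim.run_day():
        pass
    results.append(sim.sam.health)  # assumed attribute

survivors = sum(h > 0 for h in results)
print(f"Sam survived {survivors}/50 runs")
```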
## Future Evil Plans (if I don't get bored)

- Branching events based on current health/relationships
- Multiple endings (escape / become Keeper / get eaten by Griever)
- GUI with pygame (because why not)
- Sensitivity analysis: how much Openness is too much Openness?
- Replace Sam with other public figures (you know you want to)
## Final Thoughts

This was one of those projects that started as a dumb joke and somehow became genuinely interesting to watch unfold. The combination of fixed personality traits + randomness + tiny LLM injections creates emergent stories that feel surprisingly alive for <300 lines of code.

Sometimes the silliest ideas teach you the most about agency, personality modeling, and how little prompting an LLM actually needs to be entertaining.

Give it a spin. Change the OCEAN values. Laugh at teenage Sam getting mauled by a Griever because he was "too curious". And if you make an even worse version with Elon or Vitalik — please tag me.

Merry Christmas 2025, and may your maze always have an exit. 🌀

Repo: https://github.com/ilyarah/Maze_Runner_Sam_Altman_Sim_MVP
License: MIT

What weird simulation should I build next? 👇