Papers & Ideas Alicia Draws From
My Alicia stands on the shoulders of researchers, designers, and frameworks that saw something the rest of us hadn't yet. This page lists the papers, concepts, and intellectual foundations that shaped the project — every one with a link to the source.
The fuller version with extended "how myalicia uses it" descriptions is in the repo at docs/PAPERS.md.
Foundational
Humorphism
Source: humorphism.com · Geordie Kaytes' write-up on Lab Leaks
The design philosophy that technology should take the shape of the human, not the other way around. Stop building chatbots everyone learns to talk to; start designing relationships that learn the person they're with. The entire three-loop architecture is humorphism made buildable.
Thin Harness, Fat Skills
Source: Garry Tan on X · GitHub
Architectural pattern for agent design: a small, opinionated orchestration core surrounded by many composable skill modules. Visible in the folder structure: myalicia/core/ stays tiny while myalicia/skills/ holds ~80 modules. The core stays out of the way so the skills can be where the personality lives.
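In miniature, the pattern looks something like this. The ThinCore class, its registry, and the echo/shout skills are illustrative stand-ins for this page, not the project's actual API:

```python
from typing import Callable, Dict


class ThinCore:
    """A minimal orchestrator: it only registers skills and routes to them."""

    def __init__(self) -> None:
        self._skills: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str):
        """Decorator that adds a skill module to the registry."""
        def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
            self._skills[name] = fn
            return fn
        return wrap

    def dispatch(self, name: str, message: str) -> str:
        """Route a message to one skill; the core holds no other logic."""
        return self._skills[name](message)


core = ThinCore()


@core.register("echo")
def echo(message: str) -> str:
    return f"echo: {message}"


@core.register("shout")
def shout(message: str) -> str:
    return message.upper()
```

All the behavior lives in the registered functions; growing the system means adding skills, not touching the core.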
Autoresearch
Source: Andrej Karpathy on X · GitHub
The pattern of autonomous research loops where an agent runs reflection and synthesis on its own, without human prompting. The Notice and Know loops are myalicia's local, personal, single-user version of autoresearch — synthesis runs while you sleep, novelty detection on conversations, weekly deep passes over the full vault.
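One unattended synthesis pass can be sketched as below. Both functions are hypothetical simplifications for illustration; the real loops run over the vault and conversations, not word sets:

```python
def novelty(entry: str, seen: set) -> bool:
    """Crude novelty check: does the entry introduce any unseen word?"""
    return bool(set(entry.lower().split()) - seen)


def synthesis_pass(entries: list) -> list:
    """One autonomous pass with no human prompting: keep only the entries
    that introduce something new, updating the seen-vocabulary as we go."""
    seen: set = set()
    novel = []
    for entry in entries:
        if novelty(entry, seen):
            novel.append(entry)
        seen |= set(entry.lower().split())
    return novel


# A pass over three journal fragments keeps the two that add anything new.
DIGEST = synthesis_pass(["slept badly", "slept badly", "slept fine today"])
```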
Module-Level Frameworks
Papers that name or directly shape specific skill modules.
Reflexion: Language Agents with Verbal Reinforcement Learning
Authors: Shinn et al., 2023 · ArXiv: 2303.11366
Agents critique themselves after tasks and store linguistic feedback for future episodes. Implemented in reflexion.py and meta_reflexion.py.
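The Reflexion loop in miniature: run an episode, store a verbal self-critique, and condition the next attempt on it. The function names here are invented for the sketch, not the contents of reflexion.py, and the critique is canned where Reflexion would have the model generate it:

```python
def attempt(task: str, memory: list) -> str:
    """Try a task, conditioning on verbal feedback from past episodes."""
    if memory:
        return f"answer({task} | hints: {' '.join(memory)})"
    return f"answer({task})"


def critique(result: str) -> str:
    """Self-critique after the episode; in Reflexion this is model-generated."""
    return f"last time produced {result!r}; be more specific"


memory: list = []
first = attempt("summarize notes", memory)
memory.append(critique(first))  # feedback persists across episodes
second = attempt("summarize notes", memory)
```

The key property is that the feedback is stored as language, so the next episode can simply read it.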
Constitutional AI: Harmlessness from AI Feedback
Authors: Bai et al., 2022 (Anthropic) · ArXiv: 2212.08073
Self-criticism and revision against a set of principles. Implemented in constitution.py — outputs are scored against an evolving constitution before being acted on.
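A toy version of the score-then-revise cycle, assuming two made-up principles; the real constitution is evaluated by the model itself rather than by string predicates:

```python
# Hypothetical principles: each is a name plus a pass/fail check on a draft.
PRINCIPLES = [
    ("no_absolutes", lambda text: "always" not in text.lower()),
    ("hedged", lambda text: "might" in text.lower()),
]


def score(text: str) -> list:
    """Return the names of principles the draft violates."""
    return [name for name, ok in PRINCIPLES if not ok(text)]


def revise(text: str, violations: list) -> str:
    """Toy revision; a real system asks the model to rewrite the draft."""
    fixed = text.replace("always", "often").replace("Always", "Often")
    if "hedged" in violations and "might" not in fixed.lower():
        fixed += " (this might vary)"
    return fixed


draft = "Always answer immediately."
violations = score(draft)
final = revise(draft, violations)  # final now passes both checks
```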
Generative Agents: Interactive Simulacra of Human Behavior
Authors: Park et al., 2023 (Stanford) · ArXiv: 2304.03442
Multi-timescale reflection loops in agent architectures. The three-cadence design (Listen / Notice / Know) draws on this work's insight that agents need different reflection horizons to behave consistently over time.
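The multi-timescale idea reduces to loops firing at different cadences over the same event stream. The loop names mirror this page's Listen / Notice / Know, but the intervals are invented for illustration and are not the project's actual schedules:

```python
# Illustrative cadences: ticks between firings for each reflection loop.
LOOPS = {
    "listen": 1,    # every event
    "notice": 10,   # periodic pass over recent events
    "know": 100,    # rare deep pass over everything
}


def fired(tick: int) -> list:
    """Which reflection loops run at this event count."""
    return [name for name, period in LOOPS.items() if tick % period == 0]
```

Most ticks touch only the cheap loop; the deep passes see the same history but at a longer horizon, which is what keeps behavior consistent over time.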
ReAct: Synergizing Reasoning and Acting in Language Models
Authors: Yao et al., 2023 · ArXiv: 2210.03629
Interleaving reasoning traces and acting. The handle_message pipeline mirrors this pattern — every message routes through reasoning steps before any tool is called. The tool_router is the action-selection layer.
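The thought → action → observation interleaving can be sketched like this, with a dictionary lookup standing in for both the model's reasoning and the tool:

```python
def react_loop(question: str, tools: dict, max_steps: int = 3):
    """Alternate thought -> action -> observation until an answer appears.
    The 'thoughts' here are canned strings; a real agent samples them
    from an LLM and picks the action from the trace so far."""
    trace = []
    for step in range(max_steps):
        trace.append(("thought", f"step {step}: look up {question!r}"))
        observation = tools["lookup"](question)  # act, then observe
        trace.append(("observation", observation))
        if observation is not None:
            return observation, trace
    return None, trace


FACTS = {"capital of france": "Paris"}
answer, trace = react_loop("capital of france", {"lookup": FACTS.get})
```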
Voyager: An Open-Ended Embodied Agent with LLMs
Authors: Wang et al., 2023 (NVIDIA) · ArXiv: 2305.16291
Self-extending agents that author new skills based on observed gaps. Inspires skill_author.py — when My Alicia notices a gap in her own behavior, she can draft a new skill module to handle it.
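A minimal sketch of the gap-then-author pattern, assuming an unhandled intent triggers drafting. Voyager (and skill_author.py) would have a model write and verify real code; here the drafted skill is just a placeholder:

```python
# Hypothetical skill registry with one pre-existing skill.
REGISTRY = {"summarize": lambda text: text[:20]}


def author_skill(name: str):
    """Draft a placeholder skill for an unhandled request; the real pattern
    generates code, tests it, and only then registers it."""
    def drafted(text: str) -> str:
        return f"[draft {name}] {text}"
    return drafted


def handle(intent: str, text: str) -> str:
    if intent not in REGISTRY:  # noticed a gap in her own behavior
        REGISTRY[intent] = author_skill(intent)
    return REGISTRY[intent](text)
```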
Toolformer: Language Models Can Teach Themselves to Use Tools
Authors: Schick et al., 2023 (Meta) · ArXiv: 2302.04761
Self-supervised tool-use selection. Background influence on tool_router.py — the function-calling dispatcher that chooses which skill to invoke given a message and context.
Tree of Thoughts: Deliberate Problem Solving with LLMs
Authors: Yao et al., 2023 · ArXiv: 2305.10601
Tree-search over reasoning paths. Background influence on the metacognition module's confidence-driven escalation: when the Listen-loop response confidence is low, the request escalates to a deeper Sonnet/Opus pass that explores more thoroughly.
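Confidence-driven escalation in miniature. The tier names follow the Haiku / Sonnet / Opus mapping described on this page, but the stub confidences and the threshold are invented:

```python
TIERS = ["haiku", "sonnet", "opus"]


def answer_with(tier: str, question: str):
    """Stub model call returning (answer, confidence); deeper tiers are
    simply assumed more confident in this toy setup."""
    confidence = {"haiku": 0.4, "sonnet": 0.7, "opus": 0.95}[tier]
    return f"{tier}:{question}", confidence


def escalate(question: str, threshold: float = 0.6) -> str:
    """Try the cheapest tier first; escalate while confidence is too low."""
    for tier in TIERS:
        answer, confidence = answer_with(tier, question)
        if confidence >= threshold:
            return answer
    return answer  # fall back to the deepest tier's answer
```

Tree of Thoughts explores many branches per step; the escalation here keeps only the "spend more compute when unsure" half of that idea.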
Background Influence
Ideas that shaped the project's worldview. Less direct than the module-level citations, but the influence is visible in the code.
Anthropic's Tool Use & Model Card
Source: Anthropic Tool Use docs
Function-calling primitives that tool_router.py wraps. The whole tool-execution loop in handle_message uses Anthropic's tool-use schema.
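For reference, a tool definition in Anthropic's documented tool-use shape (a name, a description, and a JSON Schema for the input) looks like this; the vault_search tool itself is hypothetical, not a real myalicia skill:

```python
# Hypothetical tool definition following Anthropic's tool-use schema:
# the model sees the description and input_schema and emits matching calls.
VAULT_SEARCH_TOOL = {
    "name": "vault_search",
    "description": "Search the user's note vault for a query string.",
    "input_schema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms."},
            "limit": {"type": "integer", "description": "Max results."},
        },
        "required": ["query"],
    },
}
```

Definitions like this are what a dispatcher passes in the API's tools list; the execution loop then runs whichever tool call the model returns and feeds the result back.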
Intrinsic Motivation in Agents
Source: Schmidhuber's curiosity work
Curiosity-driven exploration research. curiosity_engine.py uses three novelty signals derived from this tradition.
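The three signals in the repo are not spelled out on this page, so the ones below are illustrative stand-ins from the same tradition (surprise, recency, and a rough compression-progress proxy):

```python
def surprise(entry: str, seen: set) -> float:
    """Signal 1 (assumed): fraction of words never seen before."""
    words = entry.lower().split()
    return sum(w not in seen for w in words) / max(len(words), 1)


def recency_gap(topic: str, last_seen: dict, now: int) -> int:
    """Signal 2 (assumed): how long since this topic last appeared."""
    return now - last_seen.get(topic, 0)


def compression_gain(entry: str) -> float:
    """Signal 3 (assumed): unique-to-total word ratio as a crude
    compressibility proxy; Schmidhuber frames curiosity as expected
    progress in compressing one's history."""
    words = entry.lower().split()
    return len(set(words)) / max(len(words), 1)
```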
Knowledge Graphs / Personal Knowledge Management
Source: Obsidian · Andy Matuschak's notes
Vault structure assumptions and the link-as-first-class-citizen pattern. graph_intelligence.py builds on these by surfacing missing or weak links between notes the user has accumulated.
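One way to surface missing links, sketched under the assumption that shared vocabulary between unlinked notes makes a link candidate; suggest_links and the notes below are hypothetical:

```python
def suggest_links(notes: dict, links: set) -> list:
    """Propose pairs of notes that share words but are not yet linked."""
    names = sorted(notes)
    suggestions = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if (a, b) in links or (b, a) in links:
                continue  # already linked in the vault
            shared = set(notes[a].lower().split()) & set(notes[b].lower().split())
            if shared:
                suggestions.append((a, b, sorted(shared)))
    return suggestions


NOTES = {"agents": "python agents", "pkm": "python notes", "recipes": "cooking"}
SUGGESTED = suggest_links(NOTES, links=set())
```

A real version would use embeddings rather than word overlap, but the shape is the same: treat absent edges as candidates and rank them.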
The Anthropic Claude API
Source: docs.claude.com
The runtime that makes all of this possible. Three model tiers (Haiku / Sonnet / Opus) map directly to the three loops; the metacognition layer escalates between them.
Caveat
The lineage above is honest but partial. As the project grows and contributors add awareness primitives drawing on different traditions, this page will grow. If you see a connection we should add (or a citation we got wrong), open an issue.
For more on the specific implementation of each, the source code's docstrings are often the clearest reference — most skill modules cite the work that inspired them at the top.