How Role-Playing Games, Distributed Computing, and AI Development Led to an Investment Pitch Simulator

Original Publication Date: November 11, 2025
Author: William "Hawke" Hawkes-Robinson

How Does This Project Relate to My Career Journey?

I've spent close to 50 years working with role-playing games in various capacities—as a player, game master, designer, researcher, and therapeutic practitioner. In parallel, I've been writing code since 1979, building systems that range from simple pattern-matching programs on university mainframes to distributed GPU clusters processing modern AI workloads. These two paths—gaming and computing—have intersected repeatedly throughout my career in ways that weren't always obvious at the time but now seem almost inevitable in retrospect.

The RPEPTFS (Role-playing Enhanced Pitch Training and Feedback Simulator) project represents another convergence of these themes. It's an AI-powered training platform where entrepreneurs can practice their investment pitches with simulated investors based on real personalities from shows like Shark Tank and Dragons' Den. On the surface, this might seem like a straightforward application of large language models to role-playing scenarios. But the deeper patterns behind it—personality modeling using game mechanics, distributed AI infrastructure, and the philosophy of building self-hosted systems on limited budgets—trace back through decades of work on projects like SIIMPAF (Synthetic Intelligence Interactive Matrix Personal Adaptive Familiar), DGPUNET (Distributed GPU Network), AILCPH (Artificial Intelligence Large Context Project Helper), and even further back to experiments with IRC bots and Beowulf clusters in the 1990s.

This post attempts to trace those connections, showing how RPEPTFS didn't emerge from nowhere but rather represents another iteration of approaches I've been refining since the 1970s and 1980s. Whether these approaches will prove valuable in this new context remains to be determined, but the patterns themselves have shown resilience across different technologies and applications over the decades.

The Foundation: Role-Playing Games and Systems Thinking (1977-1990)

In 1977, a cousin introduced me to Dungeons & Dragons. I was young enough that the experience shaped how I thought about systems, interactions, and possibilities in ways I'm still discovering. Role-playing games aren't just entertainment—they're frameworks for thinking about dynamic interactions between rules, randomness, people, and human choice. The game master role particularly influenced my later work: you're essentially running a simulation in your head, modeling how different personalities (non-player characters) would react to player actions based on their motivations, capabilities, and histories.

Two years later, in 1979, another cousin gave me access to the University of Utah's computer network, part of what would eventually become the Internet. At 8 or 9 years old, I was learning to code in BASIC and writing programs that tried to capture some of what made RPGs interesting—particularly making computer interactions feel less mechanical and more responsive.

One early program was called "Insult Your Computer." It was exactly what it sounds like: you could type insults at the computer and it would respond with witty retorts. The program worked through keyword matching and probabilistic response selection—hundreds of patterns that would trigger different response categories with percentage-based randomization so it didn't feel completely scripted. This was primitive natural language processing, though we didn't call it that in 1979. More importantly, it introduced me to a concept that would recur throughout my career: creating the illusion of intelligence through pattern recognition and variation.
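The original BASIC source is long gone, but the core idea is easy to sketch in modern Python. The keywords, retorts, and weights below are invented for illustration; only the technique—keyword matching plus weighted random response selection—is what the 1979 program actually did.

```python
import random

# Hypothetical keyword categories, each mapping to weighted response options.
RESPONSES = {
    "stupid": [("I'm not the one typing insults at a machine.", 0.6),
               ("Error: your wit could not be located.", 0.4)],
    "slow":   [("I run exactly at the speed of your typing.", 0.5),
               ("Patience is a virtue you clearly lack.", 0.5)],
}
FALLBACKS = ["Is that the best you can do?", "My circuits remain unimpressed."]

def retort(user_input: str) -> str:
    """Match a keyword, then pick a reply by weighted random selection."""
    text = user_input.lower()
    for keyword, options in RESPONSES.items():
        if keyword in text:
            replies, weights = zip(*options)
            return random.choices(replies, weights=weights, k=1)[0]
    return random.choice(FALLBACKS)  # no keyword matched

print(retort("You are so slow!"))
```

The randomization is what kept it from feeling completely scripted: the same insult would not always produce the same comeback.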

Around the same time, I started writing role-playing game programs where the challenge wasn't tracking stats or rolling dice (that's straightforward logic), but rather making the non-player characters feel less mechanical. Early attempts were simple random text selection, but I wanted more. I started tracking whether the player had talked to an NPC before, what they had said recently, the NPC's mood based on events, and maintaining primitive memory of previous interactions. This meant maintaining state between encounters and adjusting responses based on history—within severe hardware limitations on early PCs.

The RPG design perspective influenced how I thought about these systems fundamentally. In tabletop games, the game master remembers previous sessions, adjusts NPC reactions based on player actions, and creates continuity. I was trying to capture that in code, which required thinking carefully about what to track and what to ignore given the constraints.
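A minimal sketch of that kind of NPC state tracking, written in modern Python rather than the original BASIC; the fields and mood rules here are illustrative stand-ins, not the actual 1980s program.

```python
from dataclasses import dataclass, field

@dataclass
class NPCState:
    """What the game remembers about one NPC between encounters."""
    name: str
    mood: int = 0                        # negative = hostile, positive = friendly
    met_player: bool = False
    recent_lines: list = field(default_factory=list)

    def react(self, player_line: str) -> str:
        greeting = "Back again?" if self.met_player else "A stranger approaches."
        self.met_player = True
        # Crude mood adjustment based on what the player just said.
        if "thank" in player_line.lower():
            self.mood += 1
        elif "fool" in player_line.lower():
            self.mood -= 1
        self.recent_lines = (self.recent_lines + [player_line])[-3:]  # short memory
        tone = "warmly" if self.mood > 0 else "coldly" if self.mood < 0 else "flatly"
        return f"{self.name} ({tone}): {greeting}"

innkeeper = NPCState("Innkeeper")
print(innkeeper.react("Thank you for the room."))
print(innkeeper.react("Any rumors, fool?"))
```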

From Simple Patterns to Distributed Systems (1990-2005)

By the mid-1990s, I was building IRC (Internet Relay Chat) bots that continued these themes at a larger scale. Some were entertainment bots that ran RPG-like interactions in chat channels. Others were security bots for channel management. All of them involved pattern matching, natural language processing attempts, and trying to make automated responses feel more contextually appropriate.

More importantly, this period introduced me to distributed computing through Beowulf clusters—multiple commodity computers networked together to function as a cohesive system. The economic argument was as compelling then as it is now: instead of one expensive enterprise server, use multiple less-expensive commodity systems to achieve similar capabilities at a fraction of the cost. I worked with these clusters in professional contexts where reliability mattered—systems that had to run production workloads, not just experiments.

The patterns from this era would prove remarkably applicable decades later. When you're coordinating work across multiple systems, you need to think about resource allocation, task distribution, handling failures gracefully, and monitoring the overall system health. These principles don't change much whether you're distributing CPU workloads in the 1990s or GPU workloads in the 2020s—the underlying technology changes but the coordination challenges remain similar.

The Long Path to SIIMPAF: Building AI Assistants Since the 1990s

I've been attempting to build an AI assistant since the 1990s, though the capabilities have evolved substantially over time. Early versions in Java combined automatic speech recognition (ASR) with natural language processing (NLP), trying to create something that could respond helpfully to voice commands. Around 2009, I had a primitive interactive voice system working, though "primitive" is being generous—it could recognize some commands and respond with synthesized speech, but the accuracy and contextual understanding were limited by the technology available at the time.

The project went through various names and incarnations over the years. By the 2020s, it had evolved into what I started calling AILCPH (Artificial Intelligence Large Context Project Helper) and eventually SIIMPAF (Synthetic Intelligence Interactive Matrix Personal Adaptive Familiar). The name "familiar" comes from the RPG concept—a companion that assists and adapts to the character it serves. That seemed appropriate for what I was trying to build: a self-hosted AI system that could help with document processing, answer questions about large codebases, provide voice interaction, and maintain context across sessions.

SIIMPAF isn't a single monolithic application—it's an integration of dozens of open-source components cobbled together over years. Document processing pipelines, vector databases (Qdrant), local AI models (Ollama, later vLLM), voice processing systems, and even animated avatar interfaces. The philosophy throughout has been to use existing open-source tools wherever possible rather than reinventing functionality that already exists, writing custom code only for the integration layer that ties everything together.

This approach comes partly from experience and partly from necessity. In 2021-2022, while working with LearningMate on educational technology, I integrated Vosk (speech recognition), Jitsi (video conferencing), and custom NLP pipelines to create real-time multilingual closed captioning for K-12 environments. The resulting system performed 150% faster and 30% more accurately than Google's commercial offering, demonstrating that properly integrated open-source components can outperform commercial solutions when optimized for specific use cases. But that work required a $250,000 per month R&D budget just for that project. I learned that most optimization could happen locally on limited hardware before pushing to cloud infrastructure for scale testing, which informed how I approached SIIMPAF's development on a much smaller budget.

DGPUNET: When GPU Scarcity Forced Innovation (2020-2025)

Around 2020-2025, AI development accelerated rapidly. Machine learning models, particularly large language models and generative AI, were demonstrating capabilities that seemed years away just a short time before. But this acceleration created a problem: GPU scarcity.

Training and running these models requires significant GPU resources. NVIDIA data center cards like the A100 and H100 were (and remain) extremely expensive—$8,000 to $100,000+ per card—with waiting lists measured in months. Cloud computing providers raised prices regularly, sometimes 20-30% at a time. The dynamics uncomfortably reminded me of the 1970s mainframe era: computational power centralizing in the hands of a few large organizations that could afford the resources, with everyone else dependent on them for access.

This wasn't acceptable to me, partly on philosophical grounds (computational independence matters) and partly on practical grounds (I couldn't afford those prices for the work I was doing). The patterns I'd learned decades earlier with Beowulf clusters suggested an alternative: distributed GPU computing using consumer hardware.

DGPUNET (Distributed GPU Network) applies principles similar to 1990s Beowulf clusters to modern GPU computing. Instead of one expensive enterprise GPU, use multiple consumer GPUs distributed across several machines. The economics are compelling: An H100 GPU costs around $30,000-40,000 with limited availability, and AWS H100 instances run about $40,000 per month. An H200 costs $100,000+ for hardware or $46,000 per month on AWS.

Consumer GPUs tell a different story. An RTX 4090 with 16GB VRAM costs around $1,600. An RTX 4080 with 12GB runs about $1,200. An RTX 3090 with 24GB can be found for around $1,000. The newer RTX 5090 with 32GB VRAM costs between $1,500 and $2,600 depending on manufacturer. Even factoring in the systems to house these GPUs—laptops or custom-built towers with appropriate processors and RAM—the total cost for multiple consumer GPUs comes to less than $10,000, and they're generally available.

My current DGPUNET setup consists of five machines: An Alienware M18R2 with RTX 4090 (16GB VRAM) as the head node, an Alienware M18R1 with RTX 4080 (12GB VRAM), a Dell XPS 8950 tower with RTX 3090 (24GB VRAM), an Alienware M16 with RTX 4070 (8GB VRAM), and a custom-built tower with RTX 5090 (32GB VRAM). Across these systems, I have 92GB total VRAM, 448GB system RAM, and over 100 logical CPU cores. The hardware cost totaled less than $10,000 USD versus AWS pricing that would be $40,000-46,000 per month for comparable (though not identical) enterprise GPU access.
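For reference, the inventory above can be written down as a simple node map; the numbers are just the figures already quoted, summed.

```python
# DGPUNET node inventory (VRAM in GB), matching the machines listed above.
nodes = {
    "alienware-m18r2 (RTX 4090)": 16,
    "alienware-m18r1 (RTX 4080)": 12,
    "dell-xps-8950 (RTX 3090)": 24,
    "alienware-m16 (RTX 4070)": 8,
    "custom-tower (RTX 5090)": 32,
}
print(sum(nodes.values()), "GB total VRAM across", len(nodes), "machines")  # 92 GB, 5 machines
```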

The technical challenges were substantial. Coordinating heterogeneous hardware—different GPU architectures, VRAM amounts, and capabilities—requires intelligent task allocation. Network bandwidth over standard Gigabit Ethernet (later upgraded to 10 Gigabit) is slower than the specialized high-bandwidth interconnects in data centers, which means workload distribution must minimize data transfer between GPUs. The Ray framework helps with distributed computing coordination, but proper configuration and tuning took many iterations.
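Ray handles most of the low-level coordination. A stripped-down sketch of how a GPU-bound task gets placed on whichever node has capacity follows; the workload function and prompts are illustrative, not the actual DGPUNET code.

```python
import ray

ray.init(address="auto")   # connect to the running Ray cluster from the head node

@ray.remote(num_gpus=1)    # Ray schedules this task onto a node with a free GPU
def run_inference(prompt: str) -> str:
    # In the real system this would call into a model pinned to the local GPU;
    # here it just reports which machine handled the work.
    import socket
    return f"{socket.gethostname()} handled: {prompt[:30]}"

futures = [run_inference.remote(p) for p in ["pitch A", "pitch B", "pitch C"]]
print(ray.get(futures))    # results gathered from whichever nodes ran the tasks
```

Declaring GPU requirements per task is what lets heterogeneous nodes (8GB through 32GB cards) coexist in one cluster: bigger jobs only land where they fit.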

Building DGPUNET enabled capabilities in SIIMPAF that wouldn't be practical otherwise. The animation pipeline, for example, can run Stable Diffusion image generation on one GPU while pose detection processes on another and final animation rendering happens on a third. Voice processing benefits similarly—ASR can run on one node while text-to-speech prepares on another, reducing latency in interactive conversations.

Personality Modeling: From RPG Stats to AI Investor Agents

Throughout this four-decade journey, one pattern keeps recurring: trying to make computer-controlled entities feel more realistic and responsive through personality modeling. Those early attempts in the 1980s to make RPG NPCs feel less mechanical by tracking mood and history were crude but conceptually similar to what I'm doing now with far more sophisticated tools.

Role-playing games have used numerical personality systems for decades. Different games use different models—some focus on alignment (good/evil, lawful/chaotic), others on detailed trait lists, still others on psychological models. In the 1980s, I was using RoleMaster's stat system (Reasoning, Intuition, Presence, Self-Discipline, Empathy, Quickness, Memory) to model NPC behavior in my programs.

Modern psychology offers more rigorous frameworks, particularly the Big Five personality model (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism), which has substantial research backing. When I started working on SIIMPAF's avatar system, I implemented personality modeling using Big Five scores to influence how the AI responds—more open personalities would explore novel topics more readily, higher conscientiousness would lead to more structured and thorough responses, extraversion would affect verbosity and enthusiasm, and so on.

This same approach carries forward into RPEPTFS, but with a specific application: modeling real investor personalities. Each AI investor agent in RPEPTFS has a comprehensive personality profile that combines:

  • Big Five (OCEAN): Five psychological trait dimensions scored 1-100
  • RPG Attributes (any preferred role-playing game system; RoleMaster stats in this case): Seven cognitive/behavioral attributes scored 1-100, bringing that 1980s RPG system forward
  • Investment Modifiers: Four investment behavior parameters (Skepticism Level, Risk Tolerance, Equity Greed, Control Need) scored 1-100

For example, Mark Cuban's profile might show high Openness (interested in innovative technology), high Extraversion (energetic and direct communication style), moderate-to-high Conscientiousness (does due diligence but decides quickly), moderate Agreeableness (tough negotiator but fair), and low Neuroticism (confident decision-making). His RoleMaster stats would emphasize high Reasoning and Quickness (fast analytical thinking), high Presence (commanding personality), and moderate-to-high Intuition (pattern recognition in business opportunities). The Investment Modifiers would reflect his known preferences: moderate Skepticism (questions but gives benefit of doubt to good ideas), high Risk Tolerance (willing to invest in early-stage companies), moderate Equity Greed (reasonable deal terms), and low Control Need (hands-off investor when he trusts the team).

These numerical profiles feed into the language model's system prompt and influence response generation. They're not deterministic—there's randomness and contextual variation—but they create consistent behavioral patterns over multiple interactions. This is the same fundamental approach I was using with NPCs in the 1980s, just with vastly more sophisticated underlying AI technology and based on more rigorous psychological models. This can be performed with any investment target (or any role-playing training scenario targeted NPC) with sufficient publicly available information to draw from.
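A simplified sketch of how a numerical profile might be rendered into a system prompt appears below. The field names mirror the three score groups listed above, but the scores shown are illustrative and the actual RPEPTFS schema and prompt wording differ.

```python
cuban_like_profile = {                    # illustrative scores, not the real config
    "name": "Mark Cuban",
    "big_five":  {"openness": 85, "conscientiousness": 75, "extraversion": 90,
                  "agreeableness": 55, "neuroticism": 20},
    "rpg_stats": {"reasoning": 92, "quickness": 90, "presence": 88, "intuition": 80,
                  "self_discipline": 70, "empathy": 60, "memory": 75},
    "modifiers": {"skepticism": 55, "risk_tolerance": 85,
                  "equity_greed": 50, "control_need": 30},
}

def build_system_prompt(profile: dict) -> str:
    """Flatten the numeric profile into instructions the LLM can follow consistently."""
    traits = ", ".join(f"{k}={v}" for k, v in profile["big_five"].items())
    mods = ", ".join(f"{k}={v}" for k, v in profile["modifiers"].items())
    return (f"You are {profile['name']}, an investor on a pitch panel. "
            f"Personality (1-100 scale): {traits}. Investment behavior: {mods}. "
            "Stay in character; let these scores shape your tone, the questions you ask, "
            "your risk appetite, and the deal terms you offer.")

print(build_system_prompt(cuban_like_profile))
```

Temperature and other sampling settings supply the randomness; the profile supplies the consistency.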

RPEPTFS: Bringing the Patterns Together

RPEPTFS emerged from recognizing that the pitch training problem has structural similarities to other interactive AI systems I've built. You need:

1. Personality-consistent agents that behave realistically over extended interactions
2. Context maintenance across multi-turn conversations
3. Distributed architecture to run multiple AI agents efficiently
4. Self-hosted infrastructure to maintain privacy and control costs
5. Integration of existing tools rather than building everything from scratch

The technical architecture reflects lessons from decades of work:

  • Java 21 + Spring Boot for the business logic layer (I've been using Java for enterprise systems since the late 1990s because of its strong object-oriented design principles)
  • Python 3.11 + FastAPI for the AI agent layer (Python has become the lingua franca of AI/ML work, and FastAPI provides good performance)
  • vLLM for efficient LLM inference (using local models, not cloud APIs)
  • A choice of models, for example LLaMA 3.1/3.2 (70B or 405B parameters, running self-hosted)
  • PostgreSQL 15 + Redis 7 for data persistence and caching
  • DGPUNET infrastructure to distribute model inference across multiple GPUs

Each investor agent runs as a separate FastAPI service on a dedicated port, with its own vLLM engine instance allocated specific GPU resources. The Java application coordinates sessions, tracks interest levels, manages turn-taking in panel modes, and generates feedback. The architecture is deliberately modular—following object-oriented analysis and design (OOAD) principles—because experience has taught me that clean separation of concerns makes systems far more maintainable as they evolve.
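A bare-bones sketch of what one such per-investor service could look like follows. The route, model path, and sampling settings are placeholders, and the real services carry far more session and personality handling than shown here.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from vllm import LLM, SamplingParams

app = FastAPI(title="investor-agent-cuban")          # one service instance per investor NPC

# Placeholder model path; each agent's engine is allocated specific GPU resources.
llm = LLM(model="meta-llama/Llama-3.1-70B-Instruct")
SYSTEM_PROMPT = "You are Mark Cuban evaluating a startup pitch..."  # built from the JSON profile

class PitchTurn(BaseModel):
    session_id: str
    message: str

@app.post("/respond")
def respond(turn: PitchTurn) -> dict:
    """The Java layer posts each founder message here and receives the investor's reply."""
    prompt = f"{SYSTEM_PROMPT}\n\nFounder: {turn.message}\nInvestor:"
    out = llm.generate([prompt], SamplingParams(max_tokens=256, temperature=0.8))
    return {"session_id": turn.session_id, "reply": out[0].outputs[0].text.strip()}
```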

The system currently includes six investor NPCs (Mark Cuban, Kevin O'Leary, Barbara Corcoran, Daymond John, Lori Greiner, Robert Herjavec), with plans to expand to 30+ personalities from various investment shows and then to others as data availability and need allow. Each has a comprehensive JSON configuration file that includes biographical information, personality scores, investment preferences, behavioral patterns, historical performance, and system prompt templates that prime the language model to respond consistently with that personality.

What Makes This Approach Different (and What Doesn't)

RPEPTFS isn't revolutionary—it's applying existing patterns to a specific use case. What differentiates it from other AI training simulators (to the extent that's meaningful) comes from the philosophical approach rather than novel technology:

Self-Hosted First: The entire system runs on local hardware. No dependency on cloud APIs means no per-use costs, no usage limits, complete privacy for proprietary business information in pitches, and no risk that an AI provider's policy changes break the application. This approach requires more upfront infrastructure investment but provides long-term independence.

Open Source Foundation: Every component uses open-source software—Java, Python, vLLM, LLaMA, PostgreSQL, Redis, FastAPI, Spring Boot. This means no licensing costs, freedom to modify anything that doesn't work correctly, ability to inspect how everything functions, and no vendor lock-in. It also means I can contribute improvements back to these projects.

Personality-First Design: Rather than generic chatbots, the system models specific personalities using rigorous frameworks (Big Five psychology, RPG-inspired attributes, and investment behavior modifiers). This creates more realistic training scenarios where different investors react differently to the same pitch based on their documented preferences and personalities.

Distributed Architecture for Cost Efficiency: Using DGPUNET infrastructure means running large language models (70B+ parameters) on consumer hardware that costs less than one month of enterprise GPU access on AWS. This makes the system economically sustainable on limited budgets.

OOAD-Compliant Implementation: The Java domain model uses proper object-oriented design—21 classes (so far) with clear separation of concerns, builder patterns, comprehensive validation, and extensive documentation. This isn't exciting or innovative, but it makes the codebase maintainable and extensible over time, which matters for systems that need to evolve.

What's not different: The underlying AI technology is standard (vLLM + LLaMA models), the web framework is standard (Spring Boot), the database choices are standard (PostgreSQL + Redis), and the deployment approach is standard (containerization with Docker, orchestration planning). The innovation, such as it is, comes from integration and application rather than inventing new technologies.

Limitations, Trade-offs, and Honest Assessment

RPEPTFS is an early-stage R&D project, not a production system. Several limitations are worth stating clearly:

Realism Constraints: AI investor agents are simulations based on public information about these personalities. They will never perfectly replicate how real investors would respond because we don't have access to their internal decision-making processes, and language models have inherent limitations in reasoning and consistency.

Resource Requirements: Running this system requires significant hardware investment ($10,000+ for DGPUNET infrastructure) and technical expertise to set up and maintain. This isn't a solution for everyone—it's appropriate for situations where you need complete control, have privacy requirements, expect high usage over time, or want to avoid ongoing cloud costs.

Setup Complexity: Getting DGPUNET + AILCPH + SIIMPAF + RPEPTFS fully configured and working correctly required substantial time and technical knowledge. Commercial cloud-based solutions may work faster initially, though they come with ongoing costs and privacy trade-offs.

Model Limitations: Even 70B-405B parameter models have limitations. They can generate plausible-sounding responses that are factually incorrect, sometimes lose context in very long conversations, struggle with complex numerical reasoning, and may not fully capture the nuance of expert investor evaluation.

Scalability Trade-offs: This system scales to maybe 5-10 concurrent users per DGPUNET cluster depending on model size and hardware allocation. If you need to serve hundreds or thousands of concurrent users, different architectural approaches (possibly cloud-based) become more appropriate.

Whether RPEPTFS proves useful for actual pitch training remains to be determined. The approach is sound based on decades of experience with similar systems, but every new application has unique challenges. The system needs extensive testing with real users practicing real pitches to understand whether the AI investor responses are realistic enough to provide valuable training and whether the feedback mechanisms actually help people improve their pitches.

The Broader Pattern: Four Decades of Similar Problems

Looking back across this timeline—from 1977 to 2025—several consistent patterns emerge:

Distributed over Centralized: From Beowulf clusters in the 1990s to DGPUNET in the 2020s, I've repeatedly chosen distributed commodity hardware over centralized enterprise systems for both economic and philosophical reasons.

Open Source over Proprietary: Using tools I can inspect, modify, and understand has consistently led to better long-term outcomes than depending on proprietary black boxes.

Integration over Invention: Most valuable systems come from intelligently connecting existing components rather than building everything from scratch.

Personality Modeling for Realism: Making computer-controlled entities feel more realistic through personality frameworks has been a recurring theme from 1980s NPC behaviors through modern AI avatars to investor agents.

Self-Hosting for Independence: Owning infrastructure means no ongoing costs, complete privacy, and freedom to use systems however needed without artificial restrictions.

OOAD Principles for Maintainability: Proper object-oriented design takes more upfront time but creates systems that can evolve over years rather than needing rewrites.

RPEPTFS represents another application of these patterns to a new domain. The underlying principles haven't changed much—they've proven resilient across different technologies over decades. What has changed is the sophistication of the tools available for implementing these patterns.

Looking Forward: Where This Might Lead

RPEPTFS is currently in early alpha development. The immediate work focuses on completing the web interface, implementing panel mode (multiple investors simultaneously), developing the feedback and analytics system, and extensive testing with actual users practicing pitches.

Longer-term possibilities (assuming the current system works as intended, which remains to be proven) might include:

  • Expanded investor roster: Adding 24 more investor personalities to reach 30+ total NPCs
  • Speech integration: Adding voice input/output using the ASR and TTS systems from SIIMPAF
  • Video avatars: Integrating SIIMPAF's animation pipeline for photorealistic investor avatars
  • Adaptive difficulty: Having the system adjust investor toughness based on practitioner skill level
  • Historical analysis: Tracking improvement over multiple sessions with detailed analytics
  • Custom investors: Allowing users to create investor profiles for specific contexts (e.g., angel investors in particular industries)

But these are possibilities that depend on the core system proving valuable first. Across multiple fields, from technology to my work as a Washington State Department of Health Registered Recreational Therapist and researcher, I've learned over decades that it's better to build working systems incrementally and validate assumptions with evidence from real-world practice.

The Personal Journey Continues

RPEPTFS represents another convergence of themes that have run through my work since 1977: role-playing games, personality modeling, distributed computing, open-source integration, and building systems that try to make computer interactions feel more human and contextually appropriate.

I don't know whether this particular project will succeed. I don't know whether AI-powered pitch training provides enough value to justify the technical complexity. I don't know whether the personality modeling approach will create realistic enough investor behaviors to constitute valuable practice.

What I do know is that the underlying patterns and principles have proven useful across many different contexts over four decades. Distributed commodity hardware has repeatedly outperformed centralized enterprise systems for my use cases. Open-source integration has consistently delivered better long-term value than proprietary solutions. Proper planning and object-oriented design have made systems maintainable over years. And personality modeling—from simple RPG stats in the 1980s to Big Five psychology in the 2020s—has consistently improved the realism of computer-controlled entities.

Whether these patterns apply successfully to investment pitch training is an empirical question that will be answered through testing and real-world use. That's where the project stands now—built on a foundation of decades of experience, attempting to solve a specific problem, and waiting to see whether the approach proves valuable in practice.

About RPEPTFS: The Role-playing Enhanced Pitch Training and Feedback Simulator is an early-stage R&D project using AI-powered investor agents for pitch practice. The system uses DGPUNET distributed GPU infrastructure and runs entirely on self-hosted hardware using open-source components. Current status: Alpha development. Not production-ready.

Technical Details: Java 21 + Spring Boot 3.2, Python 3.11 + FastAPI + vLLM, LLaMA 3.1/3.2 models (70B-405B parameters), PostgreSQL 15 + Redis 7, deployed on DGPUNET (92GB VRAM across 5 systems).

Repository: https://git.dev2dev.net/hawke/rpeptfs

Related Projects: SIIMPAF, DGPUNET, AILCPH

Contact: hawkenterprising@gmail.com
Website: https://www.hawkerobinson.com
Technical Blog: https://techtalkhawke.com
Version: 2025.11.11-0800
Word Count: ~4,800 words
Status: Draft for Review