I specialize in scaling B2B SaaS and platform systems across global, multi-service environments. I lead the product side of core infrastructure — governing data integrations, system logic, and AI-ready architecture across engineering and business teams.
My work focuses on API-driven systems, event-informed workflows, and cross-system state transitions, with familiarity across MuleSoft, Snowflake, Salesforce, and Kafka.
AI-aligned product design, in this context, means building systems structured for automation, intelligent analytics, and responsible scale.
Beyond enterprise delivery, my independent work examines how complex systems preserve truth, coherence, and decision integrity under uncertainty — and how AI can be shaped to do the same.
This GitHub is structured as a map of my work — beginning with lightweight AI interaction prototypes and expanding outward into systems design, event governance, platform tooling, and architectural frameworks.
If you’re exploring my work for the first time, the AI & ML UX Systems Series is the most accessible entry point: a set of lightweight interaction prototypes that show my model intuition, product thinking, and AI reasoning.
AI & ML UX Systems Series: UX-first AI prototypes that demonstrate model behavior, retrieval reasoning, explainability, and variation.
Main Hub:
https://github.com/rtfenter/AI-ML-UX-Systems-Series
Prototypes
- Minimal RAG Query Explorer (coming)
- Chat Model Behavior Sandbox
https://github.com/rtfenter/Chat-Model-Behavior-Sandbox
- Model Explainer Playground (XAI Lite) (coming)
- Prompt–Response Variation Explorer
https://github.com/rtfenter/Prompt-Response-Variation-Explorer
- Embeddings Visual Map
https://github.com/rtfenter/Embeddings-Visual-Map
Focus
Accessible demonstrations of retrieval, prompting, embeddings, explainability, and model variability.
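For a quick sense of what these prototypes explore, here is a minimal retrieval-scoring sketch: ranking toy documents against a query by cosine similarity. The vectors and document names are illustrative placeholders, not taken from any repo in the series.

```python
# Minimal sketch: rank documents against a query by cosine similarity.
# The embeddings are toy hand-made vectors; the actual prototypes may use
# real embedding models and different scoring.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "tier-benefits": [0.2, 0.8, 0.1],
    "fx-conversion": [0.1, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # toy "query embedding"

ranked = sorted(docs.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
for name, vec in ranked:
    print(f"{name}: {cosine(query, vec):.3f}")
```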
Product Architecture Case Studies: Publicly observable system-level problems in modern AI agents, analyzed from a product architecture perspective.
Main Hub:
https://github.com/rtfenter/Product-Architecture-Case-Studies
Case Studies
- AI Agent Meaning Drift — Case Study
https://github.com/rtfenter/AI-Agent-Meaning-Drift-Case-Study
- Where AI agents accumulate state backpressure — a bounded-memory and meaning-alignment model (coming)
- Preventing retrieval hallucination under context drift — a contract-first approach (coming)
- Why enterprise copilots misinterpret business logic — missing contracts between domain models and LLM reasoning (coming)
- How lineage breaks meaning — an event-governance layer for shared truths (coming)
- Cross-team semantic consistency for design tokens across markets (coming)
- Incident meaning drift — how organizations lose shared context in message-based workflows (coming)
Focus
Interpretation drift, retrieval degradation, lineage disconnects, meaning fragmentation, and organizational context loss — using only public behavior and visible system patterns.
Loyalty Systems Series: Applied loyalty architecture focused on FX, tiering, reconciliation, partner rules, and value drift.
Main Hub:
https://github.com/rtfenter/Loyalty-Systems-Series
Prototypes
- Loyalty Points Simulator
https://github.com/rtfenter/Loyalty-Points-Simulator
- Loyalty Drift Dashboard
https://github.com/rtfenter/Loyalty-Drift-Dashboard
- Loyalty Event Contract Validator
https://github.com/rtfenter/Loyalty-Event-Contract-Validator
- Tier Progression Visualizer
https://github.com/rtfenter/Loyalty-Tier-Progression-Visualizer
- Partner Rule Tester (Loyalty Edition)
https://github.com/rtfenter/Loyalty-Partner-Rule-Tester
- Redemption Value Integrity Checker
https://github.com/rtfenter/Loyalty-Redemption-Value-Checker
- FX Drift Analyzer for Loyalty Value
https://github.com/rtfenter/Loyalty-FX-Drift-Analyzer
- Loyalty Ledger Reconciliation Sandbox
https://github.com/rtfenter/Loyalty-Ledger-Reconciliation-Sandbox
Focus
Loyalty mechanics as distributed systems: event quality, FX normalization, reconciliation, and cross-market value alignment.
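As a small illustration of the value-drift problem these tools address, the sketch below normalizes per-point redemption value across markets through FX and flags deviation from a base market. The currencies, rates, point values, and tolerance are hypothetical.

```python
# Minimal sketch: flag loyalty point value drift across markets after FX
# normalization. Rates, point values, and the tolerance are hypothetical.
BASE = "USD"
fx_to_usd = {"USD": 1.00, "EUR": 1.08, "GBP": 1.27}            # hypothetical spot rates
point_value_local = {"USD": 0.010, "EUR": 0.009, "GBP": 0.006}  # value of 1 point in local currency

TOLERANCE = 0.15  # allow 15% deviation from the base-market value

base_value = point_value_local[BASE] * fx_to_usd[BASE]
for market, local_value in point_value_local.items():
    usd_value = local_value * fx_to_usd[market]
    drift = (usd_value - base_value) / base_value
    status = "OK" if abs(drift) <= TOLERANCE else "DRIFT"
    print(f"{market}: 1 point = {usd_value:.4f} {BASE} ({drift:+.1%}) {status}")
```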
Systems of Trust Series: Tools and models for event governance, schema evolution, routing correctness, and cross-service meaning alignment.
Main Hub:
https://github.com/rtfenter/Systems-of-Trust-Series
Prototypes
- Event Quality Scanner
https://github.com/rtfenter/Event-Quality-Scanner
- Event Consistency Checker
https://github.com/rtfenter/Event-Consistency-Checker
- Truth Drift Map
https://github.com/rtfenter/Truth-Drift-Map
- Cross-Service Meaning Comparator
https://github.com/rtfenter/Cross-Service-Meaning-Comparator
- Schema Evolution Impact Analyzer
https://github.com/rtfenter/Schema-Evolution-Impact-Analyzer
- Event Routing Contract Checker
https://github.com/rtfenter/Event-Routing-Contract-Checker
- Ownership Boundary Validator
https://github.com/rtfenter/Ownership-Boundary-Validator
Focus
Event contracts, schema drift, routing rules, dependency impacts, and semantic alignment across distributed systems.
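As a flavor of the contract-first checks this series builds out, here is a minimal sketch that validates an event payload against a declared contract of required fields and types. The contract and event shown are hypothetical examples, not taken from any repo in the series.

```python
# Minimal sketch: validate an event against a declared contract
# (required fields and expected types). The contract and event are
# hypothetical examples.
CONTRACT = {
    "event_type": str,
    "account_id": str,
    "points": int,
    "currency": str,
}

def validate(event: dict, contract: dict) -> list:
    errors = []
    for field, expected in contract.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected):
            errors.append(f"{field}: expected {expected.__name__}, got {type(event[field]).__name__}")
    return errors

event = {"event_type": "points_earned", "account_id": "A-123", "points": "250"}  # points mistyped, currency missing
for err in validate(event, CONTRACT):
    print("contract violation:", err)
```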
Applied Intelligence Systems Series: System-level views of how agents, models, and decision engines interpret signals and drift over time.
Main Hub:
https://github.com/rtfenter/Applied-Intelligence-Systems-Series
Prototypes
- Agent Behavior Sandbox
https://github.com/rtfenter/Agent-Behavior-Sandbox
- ML Input Drift Playground (coming)
- LLM Governance Visualizer (coming)
- Model Decision Graph Viewer (coming)
- Agent State Transition Map
https://github.com/rtfenter/Agent-State-Transition-Map
- AI Failure Mode Explorer
https://github.com/rtfenter/AI-Failure-Mode-Explorer
Focus
Interpretation layers, rule application, drift, alignment boundaries, and failure modes in modern AI systems.
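To illustrate the state-transition view these prototypes take, here is a minimal sketch that audits an agent trace against an allowed-transition map. The states and the trace are illustrative only; the actual prototypes model richer behavior.

```python
# Minimal sketch: an allowed-transition map for an agent loop, plus a
# check that a recorded trace stays inside it. States and the trace are
# illustrative placeholders.
ALLOWED = {
    "idle":      {"planning"},
    "planning":  {"tool_call", "respond"},
    "tool_call": {"planning", "respond"},
    "respond":   {"idle"},
}

def audit(trace):
    violations = []
    for current, nxt in zip(trace, trace[1:]):
        if nxt not in ALLOWED.get(current, set()):
            violations.append((current, nxt))
    return violations

trace = ["idle", "planning", "tool_call", "respond", "tool_call"]  # last hop is illegal
for current, nxt in audit(trace):
    print(f"unexpected transition: {current} -> {nxt}")
```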
Platform Systems Internal Tooling Series: Internal platform tools for integration pipelines, rule engines, debugging workflows, and data ownership mapping.
Main Hub:
https://github.com/rtfenter/Platform-Systems-Internal-Tooling-Series
Prototypes
- Admin Rule Engine Playground
https://github.com/rtfenter/Admin-Rule-Engine
- Integration Pipeline Visualizer (coming)
- Debug / Replay Sandbox (coming)
- Invariants Visualizer (coming)
- Contract Evolution Timeline (coming)
- Incident Meaning Graph (coming)
- Data Ownership Mapper (coming)
Focus
Enterprise platform logic: ETL flows, routing, invariants, contract changes, incident interpretation, and field ownership.
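As a simple example of the invariant and ownership checks this tooling is meant to surface, the sketch below evaluates a batch of records against declared invariants. The field names, owning teams, and rules are hypothetical.

```python
# Minimal sketch: declare simple pipeline invariants and check a batch of
# records against them. Field names, teams, and rules are hypothetical.
INVARIANTS = [
    ("points_non_negative", lambda r: r.get("points", 0) >= 0),
    ("currency_present",    lambda r: bool(r.get("currency"))),
    ("owner_is_known_team", lambda r: r.get("owner") in {"loyalty", "billing", "partners"}),
]

records = [
    {"points": 120, "currency": "USD", "owner": "loyalty"},
    {"points": -40, "currency": "",    "owner": "growth"},
]

for i, record in enumerate(records):
    for name, rule in INVARIANTS:
        if not rule(record):
            print(f"record {i}: invariant violated -> {name}")
```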
Recursive Identity Architecture (RIA) is my original, independently developed framework for understanding how complex systems—human or machine—preserve coherence, meaning, and identity under contradiction and uncertainty.
Main Hub:
https://github.com/rtfenter/RIA-Research-Notes
RIA explores:
- meaning and authorship schemas
- recursion and paradox stability
- signal compression and expansion
- identity integrity under uncertainty
LinkedIn: https://www.linkedin.com/in/rtfenter/
Medium: https://medium.com/@rtfenter
GitHub: https://github.com/rtfenter