Crystal Tubbs, AI researcher, governance architect, and founder of Metamorphic Curations LLC. Translating complex AI systems into equitable, responsible, high-performance outcomes for organizations ready to lead.
I didn't arrive at AI governance through a textbook. I arrived through the conviction that powerful systems, if left unwatched, will replicate the inequities already baked into our world. My research, my frameworks, and every company I've built are expressions of a single belief: technology should serve people, not extract from them.
My work sits at the intersection of behavioral AI research, governance architecture, and practical deployment. I maintain a 4.0 GPA in my MSAI program while running an active consultancy and publishing independent research, because the work demands both the rigor of scholarship and the pragmatism of someone who ships.
I coined the term "Metric Illusion" to describe how bias can transfer silently through LLM knowledge distillation while evaluation metrics stay clean. I understand what it means when the numbers look good and the system is still broken.
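The idea can be seen in a deliberately tiny sketch (illustrative only, not the paper's actual experiment or data): a distilled student matches its teacher's overall accuracy while shifting all of its errors onto one demographic group.

```python
def accuracy(preds, truth):
    """Overall fraction of correct predictions."""
    return sum(p == y for p, y in zip(preds, truth)) / len(truth)

def group_accuracy(preds, truth, groups, g):
    """Accuracy restricted to members of group g."""
    pairs = [(p, y) for p, y, gg in zip(preds, truth, groups) if gg == g]
    return sum(p == y for p, y in pairs) / len(pairs)

# Synthetic toy data: eight examples split across two demographic groups.
labels  = [1, 1, 1, 1, 0, 0, 0, 0]
groups  = ["A", "A", "B", "B", "A", "A", "B", "B"]

teacher = [1, 0, 1, 0, 0, 1, 0, 1]  # errors spread evenly across groups
student = [1, 1, 0, 0, 0, 0, 1, 1]  # same overall accuracy, errors all in B

print(accuracy(teacher, labels), accuracy(student, labels))  # 0.5 0.5
print(group_accuracy(teacher, labels, groups, "B"))          # 0.5
print(group_accuracy(student, labels, groups, "B"))          # 0.0
```

The headline metric is identical, yet the student is uniformly wrong for group B: an aggregate score can stay "clean" while group-conditional behavior quietly degrades.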
Accountability must be external, continuous, and structurally enforced, not self-reported by the systems being governed.
Every framework I publish is designed to be independently verified. Good science doesn't ask you to trust me; it gives you the tools to check.
Equitable outcomes aren't a feature; they're foundational. I design them into the architecture rather than bolting them on at the end.
I use technology to consciously innovate and create a more globally equitable future. Not as a tagline, but as the operating principle behind every system I design, every paper I publish, and every client I serve.
Crystal Tubbs, Founder

Whether you're deploying agents, navigating regulation, building AI literacy, or trying to understand what your systems are actually doing, I bring research depth and deployment experience to move you forward with confidence.
Strategic advisory for organizations deploying AI agents or LLMs at scale. I audit your systems against emerging frameworks, identify accountability gaps, and design governance architectures that hold up under regulatory scrutiny.
Production-grade AI systems designed for your specific workflows. Fine-tuned LLMs, custom AI agents, RAG pipelines, and automation architectures built to integrate cleanly into your operations.
Deep technical audits of LLM behavior across demographic variables, prompt conditions, and deployment contexts, going beyond standard benchmarks to surface the failures that appear only in the wild.
Workshops, seminars, and written curriculum for teams and leaders who need to understand AI without becoming engineers. From introductory literacy to advanced governance concepts.
Independent research, white papers, policy briefs, and thought leadership content on AI governance, safety, and ethics, written to shape how decision-makers think.
On-chain and off-chain analytics, crypto portfolio strategy, DeFi research, and blockchain infrastructure integration into AI or fintech operations. Built on deep research, not hype cycles.
Whether you're navigating AI adoption, evaluating governance risk, or exploring investment in responsible AI infrastructure, choose your entry point below.
CHRYSALIS is my flagship framework: an external governance layer that validates what AI agents believe before they act. It monitors cognitive pressure in real time, detects belief injection attacks, ensures regulatory compliance, and scores epistemic integrity across the full agent lifecycle.
This is not agent self-governance. This is governed accountability from the outside, the architectural distinction that makes the difference between a system that claims to be safe and one that can prove it.
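The outside-in pattern can be sketched in a few lines. Everything below is an illustrative assumption, not the actual CHRYSALIS API: the agent only proposes an action, and an external gate composed of independent validators decides whether it runs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    name: str
    claimed_basis: str  # the belief the agent cites to justify the action

# Hypothetical validators, each an independent check living outside the agent.
def basis_is_present(a: ProposedAction) -> bool:
    return bool(a.claimed_basis.strip())

def basis_is_whitelisted(a: ProposedAction) -> bool:
    return a.claimed_basis in {"verified_record", "user_confirmation"}

def external_gate(action: ProposedAction,
                  validators: list[Callable[[ProposedAction], bool]]) -> bool:
    """The agent never self-approves: every outside validator must sign off."""
    return all(check(action) for check in validators)

checks = [basis_is_present, basis_is_whitelisted]
print(external_gate(ProposedAction("transfer_funds", "model_hunch"), checks))        # False
print(external_gate(ProposedAction("transfer_funds", "user_confirmation"), checks))  # True
```

The design choice is that approval logic and the agent share no state: an agent that convinces itself of a false belief still cannot act until a validator it does not control confirms the basis.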
Visit chrysalisai.io
Introduces the "Metric Illusion," demonstrating how bias transfers silently through LLM knowledge distillation while standard evaluation metrics remain pristine.
Surfaces how LLMs exhibit systematically different evaluation behavior across demographic contexts in hiring, with direct implications for algorithmic employment discrimination.
Four-pillar governance architecture providing the theoretical foundation for external AI accountability: Entitlement Governance, Continuous Observability, Lifecycle Accountability, and Emergency Governance.
Theoretical framework examining how interaction styles and conversational patterns induce measurable behavioral instability in large language models, with implications for deployment safety.
A reproducible evaluation tool for measuring LLM output fidelity across prompt variations and retrieval conditions, surfacing reliability gaps in real-world RAG deployments.
MSAI capstone under Dr. Arthur Choi. A retrieval-augmented generation platform helping veterans navigate the transition to civilian careers: AI in direct service of an underserved population.
The organizations I work with aren't looking for someone who will tell them what they want to hear. They're looking for a partner who will tell them the truth, show them the data, and help them build something they're proud to put their name on.
I move between a whitepaper and a deployment architecture without losing coherence at either end. I know when each one is what the moment calls for, and I know the difference between a framework that survives academic review and one that survives contact with a production system.
The difference between a good AI outcome and a catastrophic one usually comes down to whether accountability was designed in from the beginning. I help organizations make that choice deliberately, before they need to.
Systems that govern themselves will govern themselves in their own interest. External accountability isn't a constraint on innovation; it's what makes innovation trustworthy.
Research that can't be independently checked isn't research; it's marketing. Every framework I publish is designed for verification by anyone willing to do the work.
Neutral systems are not neutral. The absence of deliberate equity design is itself a design choice, one that tends to benefit whoever already holds power.
Frameworks that only hold in controlled conditions aren't ready for deployment. I test against the messy, adversarial conditions where the work actually needs to perform.
Whether you're navigating AI adoption, blockchain investments, or automating workflows, I'm here to simplify the journey and turn mountains into manageable molehills.
St. Petersburg, FL · Available Globally