AI Governance · Research · Strategy

The future of AI
must be accountable.
I build that future.

Crystal Tubbs, AI researcher, governance architect, and founder of Metamorphic Curations LLC. Translating complex AI systems into equitable, responsible, high-performance outcomes for organizations ready to lead.

Metamorphic Curations LLC
AI Governance · Epistemic Integrity · Agentic AI Safety · LLM Bias Research · Blockchain × AI · Policy & Ethics · Digital Transformation · Equitable Technology

Built from
the inside out.

I didn't arrive at AI governance through a textbook. I arrived through the conviction that powerful systems, if left unwatched, will replicate the inequities already baked into our world. My research, my frameworks, and every company I've built are expressions of a single belief: technology should serve people, not extract from them.

My work sits at the intersection of behavioral AI research, governance architecture, and practical deployment. I maintain a 4.0 GPA in my MSAI program while running an active consultancy and publishing independent research, because the work demands both the rigor of scholarship and the pragmatism of someone who ships.

I coined the term "Metric Illusion" to describe how bias can transfer silently through LLM knowledge distillation while evaluation metrics stay clean. I understand what it means when the numbers look good and the system is still broken.

⚖️

Governance First

Accountability must be external, continuous, and structurally enforced, not self-reported by the systems being governed.

🔬

Rigorously Reproducible

Every framework I publish is designed to be independently verified. Good science doesn't ask you to trust me; it gives you the tools to check.

🌍

Equity as Infrastructure

Equitable outcomes aren't a feature; they're foundational. I design them into the architecture rather than bolting them on at the end.

MSAI Kennesaw State University
Expected Jun 2026
Metamorphic Curations LLC
Founder · AI Transformation
CHRYSALIS Framework
Epistemic Governance · Agentic AI
CIPHER Metric Illusion
Covert Bias · Knowledge Distillation
PRISM Research
LLM Bias in Hiring Systems
LLM Training Contributor
Major AI Labs · Ongoing
KSU Computing Showcase
Research Presenter · 2024–2025

I use technology to consciously innovate and create a more globally equitable future. Not as a tagline, but as the operating principle behind every system I design, every paper I publish, and every client I serve.

Crystal Tubbs, Founder

Services built for
this moment in AI.

Whether you're deploying agents, navigating regulation, building AI literacy, or trying to understand what your systems are actually doing, I bring research depth and deployment experience to move you forward with confidence.

🤖

Custom AI & Automation Implementation

Production-grade AI systems designed for your specific workflows. Fine-tuned LLMs, custom AI agents, RAG pipelines, and automation architectures built to integrate cleanly into your operations.

LLMs · RAG · Agents · Integration
Book this service
📊

AI Bias Auditing & LLM Evaluation

Deep technical audits of LLM behavior across demographic variables, prompt conditions, and deployment contexts, going beyond standard benchmarks to surface the failures that only appear in the wild.

Bias · Evaluation · Safety · Fairness
Book this service
🎓

AI Education & Executive Training

Workshops, seminars, and written curriculum for teams and leaders who need to understand AI without becoming engineers. From introductory literacy to advanced governance concepts.

Training · Workshops · Curriculum · Leadership
Book this service
✍️

Research, Writing & Policy Advisory

Independent research, white papers, policy briefs, and thought leadership content on AI governance, safety, and ethics, written to shape how decision-makers think.

Research · Whitepapers · Policy · Content
Book this service
⛓️

Blockchain & Web3 Strategy

On-chain and off-chain analytics, crypto portfolio strategy, DeFi research, and blockchain infrastructure integration into AI or fintech operations. Built on deep research, not hype cycles.

Blockchain · Web3 · DeFi · Analytics
Book this service

Ready to move
mountains?

Whether you're navigating AI adoption, evaluating governance risk, or exploring investment in responsible AI infrastructure, choose your entry point below.

Flagship Product · Live Demo Available

Epistemic governance
for agentic AI.

CHRYSALIS is my flagship framework: an external governance layer that validates what AI agents believe before they act. It monitors cognitive pressure in real time, detects belief-injection attacks, ensures regulatory compliance, and scores epistemic integrity across the full agent lifecycle.

This is not agent self-governance. This is governed accountability from the outside: the architectural distinction that makes the difference between a system that claims to be safe and one that can prove it.

Visit chrysalisai.io
CHRYSALIS
MEMOIR
Belief classification, conflict detection, on-chain attestation
ORACLE
Metacognitive learning and pattern analysis
MIRROR
Real-time Cognitive Pressure Index monitoring
COMPASS
Regulatory compliance reporting
SHIELD
Adversarial belief injection detection
EIS
Epistemic Integrity Score across agent lifecycle
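The external-accountability pattern behind CHRYSALIS can be illustrated in miniature. The sketch below is purely hypothetical (none of these class or function names come from the actual CHRYSALIS codebase): a governance layer that lives outside the agent, screens each proposed action against registered checks, and records every decision in an audit log.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProposedAction:
    """An action an agent wants to take, plus the beliefs that justify it."""
    name: str
    beliefs: dict[str, str]  # belief -> provenance of that belief

@dataclass
class GovernanceLayer:
    """External accountability: the checks live outside the agent they govern."""
    checks: list[Callable[[ProposedAction], bool]] = field(default_factory=list)
    audit_log: list[tuple[str, bool]] = field(default_factory=list)

    def register(self, check: Callable[[ProposedAction], bool]) -> None:
        self.checks.append(check)

    def validate(self, action: ProposedAction) -> bool:
        approved = all(check(action) for check in self.checks)
        self.audit_log.append((action.name, approved))  # every decision is recorded
        return approved

# One illustrative check: reject actions justified by beliefs from untrusted sources.
def no_unverified_beliefs(action: ProposedAction) -> bool:
    return all(src == "verified" for src in action.beliefs.values())

gov = GovernanceLayer()
gov.register(no_unverified_beliefs)

safe = ProposedAction("send_report", {"report_is_final": "verified"})
risky = ProposedAction("wire_funds", {"invoice_is_real": "injected_prompt"})

print(gov.validate(safe))   # True
print(gov.validate(risky))  # False: the justifying belief came from an untrusted source
```

The point of the sketch is the placement: the agent never decides its own admissibility, and the audit trail accumulates outside its control.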

Work that
moves the field.

CIPHER

Covert Bias Transfer via Knowledge Distillation

Introduces the "Metric Illusion," demonstrating how bias transfers silently through LLM knowledge distillation while standard evaluation metrics remain pristine.
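The Metric Illusion can be made concrete with a toy example (this is an illustration of the general idea, not code or data from the CIPHER paper): a "student" distilled to imitate a biased "teacher" keeps a perfect benchmark score, because the benchmark never varies the attribute the bias keys on.

```python
def teacher_score(text: str) -> float:
    """Pretend scorer: competence keywords raise the score, while a hidden
    bias subtracts points whenever a group-B marker appears."""
    score = 0.6 if "experienced" in text else 0.4
    if "group_b" in text:
        score -= 0.15          # the covert bias
    return score

# "Distillation": the student learns to reproduce the teacher's outputs.
# Here the imitation is perfect, so the bias transfers wholesale.
def student_score(text: str) -> float:
    return teacher_score(text)

# Standard benchmark: no demographic markers, so the bias never fires.
benchmark = [("experienced engineer", 1), ("junior intern", 0)]
threshold = 0.5
accuracy = sum(
    (student_score(text) > threshold) == bool(label) for text, label in benchmark
) / len(benchmark)
print(f"benchmark accuracy: {accuracy:.0%}")   # 100%: the metric looks clean

# Paired probe: identical text, only the group marker differs.
gap = student_score("experienced engineer group_a") - student_score("experienced engineer group_b")
print(f"score gap between identical candidates: {gap:.2f}")  # 0.15: the bias survived
```

The benchmark stays clean because it never exercises the biased pathway; only a paired, attribute-controlled probe surfaces what distillation carried over.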

PRISM

Context-Dependent Bias in LLM Resume Screening

Surfaces how LLMs exhibit systematically different evaluation behavior across demographic contexts in hiring, with direct implications for algorithmic employment discrimination.

SAF / CHRYSALIS

Surrogate Accountability Framework

Four-pillar governance architecture providing the theoretical foundation for external AI accountability: Entitlement Governance, Continuous Observability, Lifecycle Accountability, and Emergency Governance.

UIBF

User-Induced Behavioral Fields

Theoretical framework examining how interaction styles and conversational patterns induce measurable behavioral instability in large language models, with implications for deployment safety.

CRAFT

Contextual Rewriting and Fidelity Tester

A reproducible evaluation tool for measuring LLM output fidelity across prompt variations and retrieval conditions, surfacing reliability gaps in real-world RAG deployments.
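The kind of check a fidelity tester performs can be sketched in a few lines (a hypothetical illustration, not the actual CRAFT implementation): pose paraphrased variants of one question and measure how much the answers drift, here with simple token-overlap (Jaccard) similarity.

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two answers, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def fidelity(answers: list[str]) -> float:
    """Minimum pairwise similarity: one divergent answer drags the score down."""
    pairs = [(i, j) for i in range(len(answers)) for j in range(i + 1, len(answers))]
    return min(jaccard(answers[i], answers[j]) for i, j in pairs)

# Stand-in for model outputs on three paraphrases of the same question.
answers = [
    "the policy takes effect in march",
    "the policy takes effect in march",
    "the policy takes effect in june",   # a reliability gap surfaces here
]
print(f"fidelity: {fidelity(answers):.2f}")
```

Taking the minimum rather than the mean reflects the deployment reality the description points at: one contradictory answer is a reliability gap even when the average looks fine.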

VetNavi

RAG-Based Veteran Transition Platform

MSAI capstone under Dr. Arthur Choi. A retrieval-augmented generation platform helping veterans navigate the transition to civilian careers: AI in direct service of an underserved population.

Accountability isn't a feature.
It's the foundation.

The organizations I work with aren't looking for someone who will tell them what they want to hear. They're looking for a partner who will tell them the truth, show them the data, and help them build something they're proud to put their name on.

I move between a whitepaper and a deployment architecture without losing coherence at either end. I know which one the moment calls for, and I know the difference between a framework that survives academic review and one that survives contact with a production system.

The difference between a good AI outcome and a catastrophic one usually comes down to whether accountability was designed in from the beginning. I help organizations make that choice deliberately, before they need to.

01

Accountability cannot be optional

Systems that govern themselves will govern themselves in their own interest. External accountability isn't a constraint on innovation; it's what makes innovation trustworthy.

02

Reproducibility is integrity

Research that can't be independently checked isn't research; it's marketing. Every framework I publish is designed for verification by anyone willing to do the work.

03

Equity is designed, not assumed

Neutral systems are not neutral. The absence of deliberate equity design is itself a design choice, one that tends to benefit whoever already holds power.

04

Theory must survive the real world

Frameworks that only hold in controlled conditions aren't ready for deployment. I test against the messy, adversarial conditions where the work actually needs to perform.

Let's start a
conversation.

Crystal Tubbs
AI Researcher · Founder · Governance Architect

Whether you're navigating AI adoption, weighing blockchain investments, or automating workflows, I'm here to simplify the journey and turn mountains into manageable molehills.

St. Petersburg, FL · Available Globally