Projects
These projects test one thesis: frontier AI becomes valuable when it survives deployment reality.
The artifacts below are not a chronological project dump. They are proof points for a specific kind of engineering taste: source-backed workflows, observable infrastructure, public-data sensemaking, and systems designed around human trust.
Featured Builds
ATO Copilot
Source-backed AI for government authorization workflows. ATO Copilot is a hackathon prototype that turns synthetic evidence artifacts into structured control analysis, reviewer questions, recommended actions, and provenance traces.
- What it is: a workflow-native AI prototype for evidence review, not a generic chatbot over documents.
- What it proves: high-trust AI systems should be artifact-driven, evidence-backed, and designed to preserve human judgment.
- Why it matters: as agentic software increases delivery velocity, the bottleneck shifts from writing code to proving that systems can be trusted.
- Links: Project page · Writeup · Source
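The artifact-driven flow above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the class names, fields, and the `analyze` helper are all hypothetical stand-ins for the idea that every finding carries a provenance trace back to its evidence.

```python
from dataclasses import dataclass, field

# Hypothetical shapes illustrating an artifact-driven review flow:
# every finding keeps a pointer back to the evidence that produced it.
@dataclass
class EvidenceArtifact:
    artifact_id: str
    control_id: str   # e.g. a control identifier like "AC-2"
    excerpt: str      # the text the analysis is grounded in

@dataclass
class ControlFinding:
    control_id: str
    status: str                                     # e.g. "needs-review"
    reviewer_questions: list = field(default_factory=list)
    provenance: list = field(default_factory=list)  # artifact_ids backing the finding

def analyze(artifacts):
    """Group evidence by control and emit findings that cite their sources."""
    by_control = {}
    for a in artifacts:
        by_control.setdefault(a.control_id, []).append(a)
    findings = []
    for control_id, evidence in by_control.items():
        findings.append(ControlFinding(
            control_id=control_id,
            status="needs-review",  # a human reviewer resolves the final status
            reviewer_questions=[
                f"Does {e.artifact_id} fully cover {control_id}?" for e in evidence
            ],
            provenance=[e.artifact_id for e in evidence],
        ))
    return findings
```

The design point is the `provenance` field: the system never emits a conclusion it cannot trace to an artifact, which is what keeps the human reviewer in the loop.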
Bare-Metal AI Lab
Local AI infrastructure lab for model serving, observability, and deployment experiments. A bare-metal environment for operating model serving below the managed-API layer: vLLM-compatible serving, GPU telemetry, Prometheus-compatible metrics, Grafana-style dashboards, and local experiments on RTX 3090 and Blackwell-class desktop AI hardware.
- What it is: an operator lab for model serving, quantization, telemetry, service replacement, and failure recovery.
- Models: Gemma 4 31B, Gemma 4 26B A4B variants, Qwen 35B-class FP8/MoE, Whisper Large v3, OpenVLA 7B.
- What it proves: infrastructure judgment comes from operating real compute, watching real failure modes, and understanding what managed cloud abstracts away.
- Links: Project page
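The telemetry path described above, GPU samples exposed as Prometheus-compatible metrics, can be sketched without any GPU present. The `nvidia-smi` query flags shown in the comment are real, but the metric names and the exporter shape here are illustrative assumptions, not the lab's actual code.

```python
# Sketch: turn one line of
#   nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv,noheader,nounits
# into a sample dict, then render it in Prometheus exposition format.
# Metric names below are hypothetical.

def parse_gpu_sample(line: str) -> dict:
    """Parse one CSV line of nvidia-smi output into typed fields."""
    index, util, mem = (field.strip() for field in line.split(","))
    return {
        "gpu_index": int(index),
        "gpu_utilization_percent": float(util),  # 0-100
        "gpu_memory_used_mib": float(mem),
    }

def to_prometheus_lines(sample: dict) -> list:
    """Render the sample as Prometheus-style gauge lines."""
    label = f'{{gpu="{sample["gpu_index"]}"}}'
    return [
        f'gpu_utilization_percent{label} {sample["gpu_utilization_percent"]}',
        f'gpu_memory_used_mib{label} {sample["gpu_memory_used_mib"]}',
    ]
```

Scraping this output on an interval is what makes failure modes visible: utilization flatlining during a serving stall shows up in the dashboard before it shows up in user-facing latency.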
RFP Map
Spatial market intelligence interface for public-sector demand. RFP Map turns SAM.gov opportunities into a mobile-first browsing interface for exploring agencies, themes, opportunity clusters, and source-linked contract pages without needing to understand procurement search syntax.
- What it is: a public-data interface for scanning federal market terrain before narrowing into search.
- What it proves: messy public data can become something to explore, cluster, and reason over, not just query.
- Why it matters: public data is often technically available but practically illegible; the interface is the difference between access and usable understanding.
- Links: Project page · Live · Source
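The clustering step above, raw opportunity records grouped into browsable buckets, can be shown with a toy version. The record fields here (`agency`, `naics`, `title`) are simplified stand-ins; real SAM.gov payloads carry many more fields, and this grouping logic is an illustration rather than RFP Map's actual implementation.

```python
from collections import defaultdict

def cluster_opportunities(opportunities: list) -> dict:
    """Group raw opportunity dicts into agency -> theme buckets for browsing.

    Uses the NAICS code as a crude theme key; records without one fall into
    an "uncategorized" bucket so nothing disappears from the map.
    """
    clusters = defaultdict(lambda: defaultdict(list))
    for opp in opportunities:
        agency = opp.get("agency", "Unknown agency")
        theme = opp.get("naics", "uncategorized")
        clusters[agency][theme].append(opp["title"])
    # Convert nested defaultdicts to plain dicts for a stable public shape.
    return {agency: dict(themes) for agency, themes in clusters.items()}
```

The interface point: a user scanning agency-by-theme buckets never needs procurement search syntax, which is the difference between data being available and being legible.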
Research Foundation
Before building production AI systems, I worked on deep learning, computer vision, and medical image analysis. That research background informs how I think about model behavior, evaluation, and the gap between benchmark performance and deployed reliability. See Publications or Google Scholar.
Current Build Direction
The next set of projects will keep pushing on the same question: how do frontier models become reliable tools inside constrained, high-trust workflows?
- source-backed compliance and authorization workflows
- observable AI infrastructure below the API layer
- public-data interfaces for government market intelligence
- agentic systems with evidence, provenance, and human review built in
