Applied Research

Applied experiments in AI-assisted software engineering, human-in-the-loop autonomous systems, and AI trust & observability.

Real insights, not academic theater. We focus on what works in production.

Focus Areas

Three interconnected research domains that inform all our work.

Focus Area 1

AI-Assisted Software Engineering

Where AI actually helps engineers — and where it fails

How AI is used in coding, testing, infrastructure, DevOps, and QA. We study tooling, workflows, and evaluation — not "vibe coding" or shallow productivity demos.
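
To make "evaluation" concrete, here is a minimal sketch of one way a patch-scoring harness could look, assuming a git repo with a pytest suite; the function name and return shape are illustrative, not a shipped tool:

```python
import subprocess
import tempfile
from pathlib import Path

def score_patch(repo: Path, patch: str) -> dict:
    """Apply an AI-generated patch to a throwaway clone and run the tests.

    Sketch only: assumes a git repo with a pytest suite. Real benchmarks
    also track build time, diff size, and review effort.
    """
    with tempfile.TemporaryDirectory() as tmp:
        work = Path(tmp) / "work"
        # Clone into a sandbox so a bad patch can't damage the real checkout.
        subprocess.run(["git", "clone", str(repo), str(work)], check=True)

        # A patch that doesn't even apply is itself a useful failure signal.
        applied = subprocess.run(
            ["git", "apply", "-"], cwd=work, input=patch, text=True
        ).returncode == 0
        if not applied:
            return {"applied": False, "tests_passed": False}

        # Exit code 0 from pytest means the full suite passed.
        tests = subprocess.run(["pytest", "-q"], cwd=work)
        return {"applied": True, "tests_passed": tests.returncode == 0}
```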

Key Questions We Explore

  • Where does AI actually help engineers?
  • Where does it silently fail?
  • How do humans supervise AI-written code?

Research Outputs

  • Research notes
  • Benchmarks
  • Reference architectures
  • Open tooling experiments

Focus Area 2

Human-in-the-Loop Autonomous Systems

Fully autonomous systems are brittle. Human-aware autonomy scales better.

Most people skip the human part. We study AI agents built around human checkpoints, override mechanisms, and escalation design, along with the cognitive load these systems place on their human operators.
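
A minimal sketch of the checkpoint pattern in question: gate each agent action on confidence, blast radius, and reversibility, escalating to a human when a threshold is crossed. The thresholds and names below are illustrative assumptions, not a published design:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    AUTO_APPROVE = auto()  # low risk, high confidence: let the agent proceed
    ESCALATE = auto()      # pause and route to a human reviewer with context
    REJECT = auto()        # too risky to run even with human review

@dataclass
class AgentAction:
    description: str
    confidence: float  # agent's self-reported confidence, 0..1
    blast_radius: int  # rough impact size, e.g. files or records touched
    reversible: bool

def checkpoint(action: AgentAction) -> Decision:
    # Irreversible, wide-impact actions never run unattended.
    if not action.reversible and action.blast_radius > 100:
        return Decision.REJECT
    # Low confidence or non-trivial impact goes to a human.
    if action.confidence < 0.8 or action.blast_radius > 10:
        return Decision.ESCALATE
    return Decision.AUTO_APPROVE

# A small, reversible edit sails through...
print(checkpoint(AgentAction("fix typo in README", 0.95, 1, True)))   # AUTO_APPROVE
# ...while an irreversible schema change stops at a human checkpoint.
print(checkpoint(AgentAction("drop unused column", 0.9, 40, False)))  # ESCALATE
```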

Key Questions We Explore

  • How do we design reliable human checkpoints?
  • When should systems escalate to humans?
  • What cognitive load is acceptable?

Research Outputs

  • Design patterns
  • Control flow diagrams
  • Case studies
  • Evaluation frameworks

Focus Area 3

Trust, Evaluation & Observability

The glue holding everything together

Measuring correctness beyond accuracy. We study drift detection, confidence signaling, auditability, and how to know when AI systems are actually working.
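
As one concrete drift signal among many, here is a minimal sketch that compares a rolling window of recent output scores against a frozen known-good baseline using SciPy's two-sample Kolmogorov-Smirnov test. The window size and p-value threshold are illustrative assumptions:

```python
from collections import deque
from scipy.stats import ks_2samp  # SciPy's two-sample KS test

class DriftDetector:
    """Flag drift when recent output scores stop resembling a known-good baseline.

    Sketch only: a KS test over a rolling window is one drift signal of many;
    real systems combine several and tune thresholds per workload.
    """

    def __init__(self, baseline, window=200, p_threshold=0.01):
        self.baseline = list(baseline)      # scores from when the system was known-good
        self.recent = deque(maxlen=window)  # rolling window of live scores
        self.p_threshold = p_threshold      # smaller = fewer, higher-confidence alarms

    def observe(self, score: float) -> bool:
        """Record one live score; return True when drift is detected."""
        self.recent.append(score)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data for a meaningful comparison yet
        _, p_value = ks_2samp(self.baseline, list(self.recent))
        return p_value < self.p_threshold
```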

Key Questions We Explore

  • How do we measure correctness beyond accuracy?
  • How do we detect when systems drift?
  • What makes AI outputs auditable?

Research Outputs

  • Evaluation rubrics
  • Trust checklists
  • Observability dashboards (reference designs)
  • Failure taxonomy

Current Work

What we're actively researching and building.

Research Brief (Q1 2026)

AI in Software Engineering — Baseline

Mapping current AI-assisted workflows in coding, testing, and DevOps.

AI-Assisted SE

Design Patterns (Q1 2026)

Human-in-the-Loop Workflow Diagrams

Critical interaction points where human oversight matters.

Human-in-the-Loop

Framework (Q1 2026)

Trust & Observability Metrics v1

Initial metrics for evaluating AI agent reliability.

Trust & Observability

How It All Connects

  • Research → produces frameworks
  • AI Literacy → teaches how to use them
  • DAIP → applies them in real work
  • Open Engineering → makes everything inspectable

Nothing is isolated. Nothing is performative.

Collaborate With Us

We welcome research collaborations with universities, indie builders, nonprofits, and open-source maintainers. Everything produced here is open by default.