
E/D/R Framework

An AI investment framework classifying applications by how much human responsibility transfers to the AI layer: Execution (E), Decision (D), or Responsibility (R). Adoption success depends on organizational readiness to accept each layer's responsibility, not on the underlying model's technical capability.

Ethan Cho
Chief Investment Officer, TheVentures

What is the E/D/R Framework?

The E/D/R Framework categorizes AI applications by accountability layer, not by technology level. The core insight is that adoption success depends on whether an organization is ready to accept the responsibility transfer at each layer, not on how capable the underlying model is.

**E — Execution AI.** Humans decide what to do; AI only executes. Responsibility stays entirely with the human operator. Examples: code generation, document drafting, data processing, content creation, transcription. This is where most enterprise AI lives today. The bar for adoption is low because nothing about the existing accountability structure changes.

**D — Decision AI.** Humans define the scope or domain; AI makes judgments or recommendations within that scope, so responsibility is partially transferred. Examples: credit scoring, medical diagnostic assistance, investment screening, fraud detection. Adoption requires explainability infrastructure (interpretable models, audit trails) and regulatory frameworks that recognize the new accountability split.

**R — Responsibility AI.** Humans set only the high-level goal; AI takes full accountability for outcomes. Examples: autonomous driving, surgical robots, large-scale discretionary algorithmic trading. This is the frontier: it barely exists in regulated industries because no institution has yet accepted AI accountability for outcomes at scale.

The key contrarian claim: no matter how technically capable an AI system is, placing it at the wrong responsibility layer guarantees failure. The bottleneck for commercial adoption is not model capability but organizational readiness to accept the responsibility transfer that each layer requires. A brilliant autonomous trading system deployed at a bank that is not ready to own the losses will fail regardless of its Sharpe ratio; a mediocre code copilot deployed as Execution AI will succeed regardless of generation quality.
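The layer definitions above amount to a small decision rule: an application succeeds only if the buyer can accept its layer's responsibility. A minimal sketch in Python, where the class names and readiness fields are illustrative assumptions of mine rather than part of the framework:

```python
from dataclasses import dataclass
from enum import Enum


class Layer(Enum):
    """The three E/D/R responsibility layers."""
    EXECUTION = "E"       # human decides what; AI only executes
    DECISION = "D"        # human sets scope; AI judges within it
    RESPONSIBILITY = "R"  # human sets goal; AI owns outcomes


@dataclass
class Buyer:
    """Hypothetical readiness profile of a target organization."""
    accepts_execution: bool = True           # almost always true today
    has_explainability_infra: bool = False   # interpretable models, audit trails
    accepts_outcome_liability: bool = False  # willing to let AI own outcomes


def adoption_fit(layer: Layer, buyer: Buyer) -> bool:
    """Core claim: fit depends on readiness for the layer, not model quality."""
    if layer is Layer.EXECUTION:
        return buyer.accepts_execution
    if layer is Layer.DECISION:
        return buyer.has_explainability_infra
    return buyer.accepts_outcome_liability


# A capable D-layer diagnostic fails at a hospital without explainability
# infrastructure, while a mediocre E-layer copilot succeeds at the same buyer.
hospital = Buyer(has_explainability_infra=False)
print(adoption_fit(Layer.DECISION, hospital))   # False
print(adoption_fit(Layer.EXECUTION, hospital))  # True
```

Note that model capability never appears as an input to `adoption_fit` — that is the framework's contrarian claim made explicit.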

Practical Application

For VC underwriting of AI startups: first classify the product's responsibility layer (E, D, or R), then evaluate whether the target buyer's organization can accept that layer's responsibility. An AI diagnostic startup classified as D needs to be sold into healthcare systems with explainability-ready compliance teams; the same product sold into regulated hospitals without that infrastructure will fail regardless of diagnostic accuracy.

For portfolio construction: E-layer startups have the highest deal volume but commoditize fast (low moat, price pressure from foundation model providers). D-layer startups have stronger moats via regulatory infrastructure and integration depth. R-layer startups carry the highest risk but the largest TAM, and succeed only in industries where the responsibility transfer is structurally accepted (e.g. algorithmic trading in quant funds, autonomous mobility in specific geographies and jurisdictions).

The mistake most AI investors make is evaluating AI companies on benchmark performance (technical capability) instead of accountability fit (organizational readiness for the responsibility layer the product actually occupies).
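The two-step underwriting screen and the layer-level portfolio heuristics above can be sketched as follows. The profile table encodes only the qualitative claims from this section; the function name and return strings are hypothetical illustrations, not a real screening tool:

```python
from enum import Enum


class Layer(Enum):
    E = "Execution"
    D = "Decision"
    R = "Responsibility"


# Qualitative layer economics as stated in the text (not data-driven).
PROFILE = {
    Layer.E: {"moat": "low", "risk": "low",
              "note": "high deal volume, fast commoditization"},
    Layer.D: {"moat": "high", "risk": "medium",
              "note": "moat via regulatory infra and integration depth"},
    Layer.R: {"moat": "high", "risk": "high",
              "note": "largest TAM; needs structural responsibility transfer"},
}


def underwrite(layer: Layer, buyer_ready_for_layer: bool) -> str:
    """Two-step screen: accountability fit first, then layer economics."""
    if not buyer_ready_for_layer:
        return "pass: buyer cannot accept this layer's responsibility"
    p = PROFILE[layer]
    return f"diligence: moat={p['moat']}, risk={p['risk']} ({p['note']})"


# A D-layer diagnostic sold to a buyer without explainability infra is a pass,
# whatever its accuracy; with a ready buyer, it proceeds on layer economics.
print(underwrite(Layer.D, buyer_ready_for_layer=False))
print(underwrite(Layer.D, buyer_ready_for_layer=True))
```

The ordering matters: accountability fit is evaluated before any capability or economics question, mirroring the section's claim about where most investors go wrong.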

Data Source

Ethan Cho's investment framework at TheVentures (2025-present), developed during AI investment screening across 100+ deals

How to Cite

APA: Cho, E. (2026). E/D/R Framework. VentureOracle. https://ventureoracle.kr/concepts/edr-framework

MLA: Cho, Ethan. “E/D/R Framework.” VentureOracle, 2026, ventureoracle.kr/concepts/edr-framework.

TOPICS

E/D/R Framework · EDR Framework · AI investment framework · AI adoption framework · AI accountability · execution AI · decision AI · responsibility AI · AI underwriting · Ethan Cho · TheVentures