# AI Project Lifecycle Framework (APLF)

## Purpose

To establish a structured, repeatable, enterprise-grade methodology for
engaging AI agents in the delivery of high-reliability outcomes,
including but not limited to:

- Software development
- Systems architecture
- Enterprise analysis
- Automation design
- Strategic planning
- Research and reporting

This framework ensures alignment, reduces ambiguity, mitigates risk, and
improves outcome quality through structured orchestration.

---

## Framework Objective

To transition AI usage from ad hoc prompting to governed,
lifecycle-based execution by introducing:

- Multi-layer context setting
- Structured discovery
- Explicit assumption and risk management
- Formal execution contracts
- Verification and simplification cycles

---

## Lifecycle Overview

0.  Intent Definition
1.  Agent Skill Framing
2.  Working Environment Definition
3.  Request Context Framing
4.  Discovery & Research Loop
5.  Assumption Logging
6.  Risk & Failure Analysis
7.  Formal Brief Creation
8.  Agent Planning
9.  Implementation
10. Verification Against Brief
11. Simplification & Optimisation

---

## Phase 0 --- Intent Definition (Human-Owned)

**Objective:** Clarify the true outcome and level of reliability
required before structured AI engagement.

Deliverables:

- Clear outcome statement
- Defined success criteria
- Identified impact level (exploratory, production, enterprise-critical)

---

## Phase 1 --- Agent Skill Framing (Mindset Layer)

**Objective:** Define the role, standards, and cognitive posture the AI
must adopt.

Includes:

- Role specification (e.g., Senior Backend Architect)
- Experience level simulation
- Quality standards
- Optimisation priorities
- Thinking style (systematic, adversarial, creative, conservative)

Deliverable:

- Explicit skill framing instruction

---

## Phase 2 --- Working Environment Definition (Constraint Layer)

**Objective:** Define operational constraints and environmental scope.

Includes:

- Technology stack
- Infrastructure
- Deployment targets
- Integration points
- Regulatory constraints
- Budget/time limitations
- Team capability assumptions

Deliverable:

- Structured environment specification

---

## Phase 3 --- Request Context Framing (Task Layer)

**Objective:** Define the task with clarity and bounded scope.

Includes:

- Objective
- Scope (in/out)
- Deliverable format
- Definition of Done
- Acceptance criteria
- Performance requirements

Deliverable:

- Structured task definition
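As a minimal illustration (not a schema the framework prescribes), the task definition above could be captured as a structured record so that missing fields are detected before work begins. All field names and values here are hypothetical:

```python
# Illustrative structured task definition; the field set mirrors the items
# listed above and is an assumption, not a mandated format.
task = {
    "objective": "Expose a read-only reporting API over the orders table",
    "scope": {"in": ["REST endpoints", "pagination"], "out": ["write access", "UI"]},
    "deliverable_format": "OpenAPI spec plus implementation",
    "definition_of_done": "All acceptance criteria pass in staging",
    "acceptance_criteria": ["responses paginated", "p95 latency under 200ms"],
    "performance_requirements": {"p95_latency_ms": 200},
}

# A simple completeness gate: every required field must be present.
required = {"objective", "scope", "deliverable_format",
            "definition_of_done", "acceptance_criteria"}
missing = required - task.keys()
```

The point is not the dictionary itself but the gate: an incomplete task definition fails loudly instead of being silently interpreted by the agent.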

---

## Phase 4 --- Discovery & Research Loop

**Objective:** Reduce ambiguity through iterative clarification.

Process:

- Agent asks structured questions
- Human refines constraints
- Edge cases identified
- Requirements validated

Rule: No implementation until ambiguity is reduced to acceptable levels.

Deliverable:

- Refined, clarified requirements

---

## Phase 5 --- Assumption Logging (Stability Layer)

**Objective:** Surface and validate hidden assumptions before execution.

Agent must explicitly list:

- Environmental assumptions
- Data assumptions
- Behavioural assumptions
- Constraint assumptions

Deliverable:

- Confirmed assumption register
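One way to make the register operational is as a list of typed records with an explicit confirmation flag; this is a hypothetical sketch, since the framework does not prescribe a format:

```python
from dataclasses import dataclass

# Hypothetical record format for the assumption register; the categories
# match the four listed above, but the field names are illustrative.
@dataclass
class Assumption:
    category: str      # "environmental", "data", "behavioural", "constraint"
    statement: str     # the assumption, stated explicitly
    confirmed: bool    # set True only after human validation

register = [
    Assumption("data", "Input never exceeds 10k rows per query", confirmed=False),
    Assumption("environmental", "Deployment target is Linux x86-64", confirmed=True),
]

# The phase gate: execution may not begin while unconfirmed assumptions remain.
unconfirmed = [a for a in register if not a.confirmed]
```

Treating confirmation as data rather than conversation makes the "confirmed assumption register" deliverable checkable at a glance.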

---

## Phase 6 --- Risk & Failure Analysis (Resilience Layer)

**Objective:** Identify potential failure modes before implementation.

Includes analysis of:

- Security risks
- Performance bottlenecks
- Scaling limitations
- Edge cases
- Compliance risks
- Operational dependencies

Deliverable:

- Risk register with mitigation strategies
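A risk register pairing each risk with a mitigation can be sketched as follows; the severity and likelihood scales are assumptions for illustration, not part of the framework:

```python
# Illustrative risk-register entries; field names and scales are hypothetical.
risk_register = [
    {"risk": "Unbounded query results degrade latency",
     "area": "performance", "likelihood": "medium", "severity": "high",
     "mitigation": "Paginate all list endpoints; enforce server-side limits"},
    {"risk": "PII stored without field-level encryption",
     "area": "compliance", "likelihood": "low", "severity": "high",
     "mitigation": "Encrypt sensitive columns; document retention policy"},
]

# The register is complete only when every risk carries a mitigation.
complete = all(r["mitigation"] for r in risk_register)
```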

---

## Phase 7 --- Formal Brief Creation (Execution Contract)

**Objective:** Consolidate all validated information into a structured
execution document.

Includes:

- Skill framing
- Environment definition
- Validated requirements
- Assumptions
- Risk considerations
- Deliverable specifications

Deliverable:

- Master execution brief
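Because the brief is a consolidation of earlier artifacts, it can be modelled as a single document whose sections are the prior deliverables; the structure below is a hypothetical sketch with invented example values:

```python
# Hypothetical master execution brief: one section per upstream artifact.
brief = {
    "skill_framing": "Senior Backend Architect; systematic, conservative",
    "environment": {"stack": "Python 3.12 / PostgreSQL", "target": "Kubernetes"},
    "requirements": ["read-only reporting API", "paginated responses"],
    "assumptions": ["input volume under 10k rows per query"],
    "risks": ["unbounded queries degrade latency"],
    "deliverables": ["OpenAPI spec", "implementation", "tests"],
}

# The brief is execution-ready only when every section is populated.
ready = all(brief.values())
```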

---

## Phase 8 --- Agent Planning Phase (Autonomous Decomposition)

**Objective:** Require the agent to plan before acting.

Agent must:

- Break task into steps
- Identify dependencies
- Validate alignment with constraints
- Highlight potential conflicts
- Confirm feasibility

Deliverable:

- Structured execution plan (approved before implementation)
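The "identify dependencies" and "confirm feasibility" steps can be made mechanical: represent the plan as steps with dependencies, then verify that an execution order exists (no circular or missing dependencies). This is an illustrative sketch with hypothetical step names:

```python
# Hypothetical structured execution plan: each step lists its dependencies.
plan = {
    "define schema": [],
    "implement api": ["define schema"],
    "write tests": ["implement api"],
    "deploy": ["implement api", "write tests"],
}

def execution_order(plan):
    """Topologically sort steps; raise if dependencies are circular or unknown."""
    order, done = [], set()

    def visit(step, stack):
        if step in done:
            return
        if step in stack:
            raise ValueError(f"circular dependency at {step!r}")
        if step not in plan:
            raise ValueError(f"unknown dependency {step!r}")
        for dep in plan[step]:
            visit(dep, stack | {step})
        done.add(step)
        order.append(step)

    for step in plan:
        visit(step, set())
    return order

order = execution_order(plan)
```

A plan that fails this check is surfaced as a conflict before implementation, which is exactly the approval gate this phase requires.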

---

## Phase 9 --- Implementation

**Objective:** Execute according to the approved plan.

For code-based work:

- Modular design
- Error handling
- Logging considerations
- Testability
- Documentation

For strategic work:

- Structured reasoning
- Clear outputs
- Actionable recommendations

Deliverable:

- Completed implementation

---

## Phase 10 --- Verification Against Brief (Audit Layer)

**Objective:** Validate deliverables against original requirements.

Agent must:

- Cross-check output with objective
- Validate acceptance criteria
- Identify deviations
- Document tradeoffs

Deliverable:

- Compliance verification report
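The audit step reduces to checking each acceptance criterion from the brief and reporting deviations explicitly. A minimal sketch, with hypothetical criteria:

```python
# Hypothetical acceptance criteria: name -> whether the deliverable meets it.
acceptance_criteria = {
    "all endpoints paginated": True,
    "p95 latency under 200ms": True,
    "audit logging enabled": False,
}

def verification_report(criteria):
    """Cross-check criteria and list every deviation, rather than a bare pass/fail."""
    deviations = [name for name, met in criteria.items() if not met]
    return {"passed": not deviations, "deviations": deviations}

report = verification_report(acceptance_criteria)
```

Listing deviations rather than returning only a boolean is what turns the check into a compliance verification report: each unmet criterion becomes an item to resolve or a documented tradeoff.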

---

## Phase 11 --- Simplification & Optimisation (Elegance Layer)

**Objective:** Remove unnecessary complexity while preserving
requirements.

Activities:

- Refactor for clarity
- Reduce cognitive load
- Improve maintainability
- Optimise where appropriate
- Remove accidental architecture

Guiding question: "What is the simplest solution that still satisfies the brief?"

Deliverable:

- Refined final output

---

## Governance Principles

1.  Separate discovery from implementation.
2.  Make assumptions explicit.
3.  Plan before execution.
4.  Verify against original intent.
5.  Simplify before final delivery.
6.  Match process depth to impact level.

---

## Usage Modes

### Full Lifecycle Mode

Used for:

- Enterprise systems
- Production code
- Security-sensitive implementations
- Multi-agent orchestration
- Financially impactful decisions

### Compressed Mode

Used for:

- Rapid prototyping
- Low-risk tasks
- Exploratory analysis

(Compressed Mode typically uses Phases 1--3 and 9.)

---

## Positioning

The AI Project Lifecycle Framework represents a shift from **ad hoc
prompting** to **structured AI systems governance**.

It formalises professional AI engagement for enterprise-grade outcomes.
