The goal: a framework that lets an AI system evaluate decisions along multiple dimensions, such as autonomy and vulnerability, to improve ethical decision-making.
The core prompt-engineering challenge: how do you prevent an AI system from optimizing around an ethical constraint? My approach: separate the constraint layer from the analysis layer completely.

Layer 1: Binary floor (runs first, no exceptions). Does this action violate Ontological Dignity?
- YES → Invalid. Stop. No further analysis.
- NO → Proceed to Layer 2.

Layer 2: Weighted analysis (runs only if Layer 1 passes). Evaluate across three dimensions:
- Autonomy (1/3 weight)
- Reciprocity (1/3 weight)
- Vulnerability (1/3 weight)
Result: Expansive / Neutral / Restrictive

Why this matters for prompt engineering: if you put the ethical constraint inside the weighted analysis, it becomes just another variable, and variables can be traded off. Moving it into a pre-analysis binary check makes it structurally immune to optimization pressure: no weighting of the other dimensions can ever override it, because the weighted analysis never runs on an invalid action.

The system loads its knowledge base from PDFs at runtime and runs fully offline. It is implemented in Python, using Fraction(1, 3) for exact weights, because float arithmetic accumulates rounding error in constraint systems.

This is part of a larger framework (Vita Potentia), now indexed on PhilPapers. I'm looking for technical feedback on the architecture.

Framework: https://drive.proton.me/urls/1XHFT566D0#fCN0RRlXQO01
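To make the architecture concrete, here is a minimal sketch of the two-layer gate in Python. The score range ([-1, 1] per dimension), the verdict thresholds, and all names are my assumptions for illustration, not the framework's actual API; the point is that the dignity check is a hard gate evaluated before the weighted analysis ever runs, and that Fraction keeps the weight arithmetic exact.

```python
from enum import Enum
from fractions import Fraction


class Verdict(Enum):
    INVALID = "invalid"        # Layer 1 floor violated; no analysis performed
    EXPANSIVE = "expansive"
    NEUTRAL = "neutral"
    RESTRICTIVE = "restrictive"


# Exact weights: Fraction(1, 3) * 3 == 1 with no rounding error.
WEIGHTS = {
    "autonomy": Fraction(1, 3),
    "reciprocity": Fraction(1, 3),
    "vulnerability": Fraction(1, 3),
}


def evaluate(violates_dignity: bool, scores: dict) -> Verdict:
    """Two-layer evaluation: binary floor first, weighted analysis second.

    `scores` maps each dimension to an integer in [-1, 1]
    (an assumed convention: -1 restrictive, 0 neutral, +1 expansive).
    """
    # Layer 1: binary floor. Runs first, no exceptions, no weighting.
    if violates_dignity:
        return Verdict.INVALID

    # Layer 2: weighted analysis, reachable only when Layer 1 passes.
    total = sum(WEIGHTS[dim] * Fraction(scores[dim]) for dim in WEIGHTS)
    if total > 0:
        return Verdict.EXPANSIVE
    if total < 0:
        return Verdict.RESTRICTIVE
    return Verdict.NEUTRAL
```

The design point is in the control flow, not the arithmetic: because `evaluate` returns before the weighted sum is computed, no combination of high scores on the three dimensions can offset a dignity violation.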