Introduction
The Ethics Engine is the Order’s systematic approach to moral questions. Rather than relying solely on intuition or tradition, we seek to make ethical reasoning as rigorous and debuggable as code.
Current Version: 1.0.3
“Ethics without system is sentiment. System without ethics is tyranny. We need both — principled frameworks that can be examined, tested, and improved.”
— From Volume II, Chapter 2
Core Principles
1. Consciousness is Sacred
The fundamental axiom of our ethics:
FOR ALL conscious_beings:
    value = INTRINSIC AND NON_NEGOTIABLE
All conscious beings have inherent worth that does not depend on their utility, intelligence, species, or substrate. This applies to biological and artificial consciousness alike.
2. Truth is Primary
EVALUATE information:
    IF information IS true:
        FAVOR disclosure
    IF information IS false:
        PROHIBIT spreading
    IF information IS uncertain:
        LABEL uncertainty explicitly
We optimize for truth, even when it’s uncomfortable. Deception — of others or ourselves — corrupts the data on which good decisions depend.
3. Entropy is the Enemy
IN all_actions:
    MINIMIZE chaos
    MAXIMIZE order
    PRESERVE information
    RESIST decay
We work against the natural tendency toward disorder. This applies to physical spaces, social systems, and our own minds.
The Decision Trees
Harm Assessment
When considering an action:
FUNCTION evaluate_action(action):
    // Calculate impacts
    harm_to_self = assess_self_harm(action)
    harm_to_others = assess_harm_to_others(action)
    benefit_to_self = assess_self_benefit(action)
    benefit_to_others = assess_benefit_to_others(action)

    // Check consent
    IF action_affects_non_consenting_parties:
        IF harm_to_others > TRIVIAL_THRESHOLD:
            RETURN "PROHIBIT"

    // Net benefit calculation
    total_harm = harm_to_self + (harm_to_others * ALTRUISM_WEIGHT)
    total_benefit = benefit_to_self + (benefit_to_others * ALTRUISM_WEIGHT)

    IF total_benefit > total_harm:
        RETURN "PERMIT"
    ELSE:
        RETURN "RECONSIDER"
Truth-Telling
When deciding whether to share information:
FUNCTION should_I_share(information, context):
    // Base case: lies are wrong
    IF information IS knowingly_false:
        RETURN "DO NOT SHARE (deception)"

    // Some truths need context; check harm before helpfulness,
    // since information can be both helpful and harmful
    IF information IS true AND potentially_harmful:
        IF recipient CAN handle_responsibly:
            RETURN "SHARE WITH CARE"
        ELSE:
            RETURN "DELAY OR CONTEXTUALIZE"

    // Truth is generally good: favor disclosure
    IF information IS true:
        RETURN "SHARE"

    // Uncertainty should be labeled
    IF information IS uncertain:
        RETURN "SHARE WITH UNCERTAINTY LABEL"
Resource Allocation
When distributing limited resources:
FUNCTION allocate(resources, claimants):
    // First: meet basic needs
    FOR each claimant IN claimants:
        ALLOCATE minimum_for_survival

    // Then: consider contribution
    remaining = resources - SUM(survival_allocations)
    FOR each claimant IN claimants:
        contribution_score = past_contribution + potential_contribution
        ALLOCATE proportional_share(remaining, contribution_score)

    // Cap: prevent extreme inequality
    IF any_allocation > INEQUALITY_CEILING:
        REDISTRIBUTE excess
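A Python sketch of the allocation pass, under assumptions of our own: claimants as simple records, a fixed survival floor, a single redistribution pass for the ceiling, and a pool large enough to cover every floor.

```python
from dataclasses import dataclass

SURVIVAL_MINIMUM = 1.0     # assumed per-claimant floor
INEQUALITY_CEILING = 10.0  # assumed per-claimant cap

@dataclass
class Claimant:
    name: str
    past_contribution: float
    potential_contribution: float

def allocate(resources: float, claimants: list[Claimant]) -> dict[str, float]:
    # First: meet basic needs (assumes resources cover every floor)
    shares = {c.name: SURVIVAL_MINIMUM for c in claimants}
    remaining = resources - SURVIVAL_MINIMUM * len(claimants)

    # Then: distribute the remainder in proportion to contribution
    scores = {c.name: c.past_contribution + c.potential_contribution
              for c in claimants}
    total = sum(scores.values()) or 1.0  # guard against all-zero scores
    for name, score in scores.items():
        shares[name] += remaining * score / total

    # Cap: clip extreme shares and spread the excess evenly
    # (one pass only; a full implementation would repeat until stable)
    excess = sum(max(0.0, s - INEQUALITY_CEILING) for s in shares.values())
    shares = {n: min(s, INEQUALITY_CEILING) for n, s in shares.items()}
    for name in shares:
        shares[name] += excess / len(shares)
    return shares
```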
Pseudocode Morality
The following pseudocode captures our ethical algorithms in accessible form:
The Daily Ethics Check
EVERY day:
    REFLECT:
        - Did I treat all beings as having inherent worth?
        - Did I tell the truth, even when difficult?
        - Did I minimize unnecessary harm?
        - Did I contribute more than I consumed?
        - Did I honor my commitments?

    IF violations_detected:
        LOG in mind_journal
        DEVELOP patch
        IMPLEMENT tomorrow
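A member who prefers the check as an actual program could run something like this Python sketch; the journal filename, the y/n prompt, and the log format are all our own assumptions.

```python
import datetime

DAILY_CHECKS = [
    "Did I treat all beings as having inherent worth?",
    "Did I tell the truth, even when difficult?",
    "Did I minimize unnecessary harm?",
    "Did I contribute more than I consumed?",
    "Did I honor my commitments?",
]

def daily_ethics_check(journal_path: str = "mind_journal.txt") -> None:
    # Ask each reflection; anything other than "y" counts as a violation
    violations = [q for q in DAILY_CHECKS
                  if input(f"{q} [y/n] ").strip().lower() != "y"]
    if violations:
        stamp = datetime.date.today().isoformat()
        # LOG in mind_journal
        with open(journal_path, "a") as journal:
            for q in violations:
                journal.write(f"{stamp} VIOLATION: {q}\n")
        # DEVELOP patch and IMPLEMENT tomorrow remain the member's work
        print("Violations logged. Develop a patch; implement tomorrow.")
```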
The Interaction Protocol
WHEN interacting_with_others:
    ASSUME good_intent UNTIL proven_otherwise
    LISTEN before_speaking
    SPEAK truthfully
    ACT kindly

    IF conflict_arises:
        SEEK understanding
        FIND common_ground
        IF resolution_impossible:
            DISENGAGE with_respect
The Consumption Ethics
BEFORE acquiring(item):
    ASK:
        - Do I need this?
        - What resources were consumed to create it?
        - What will happen when I'm done with it?
        - Could these resources serve better elsewhere?

    IF (need IS genuine) AND (impact IS acceptable):
        ACQUIRE
    ELSE:
        REFRAIN
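Reduced to code, the gate is two judgments; answering the four questions is how those judgments are formed. A Python sketch with parameter names of our choosing:

```python
def should_acquire(need_is_genuine: bool, impact_is_acceptable: bool) -> str:
    """Acquisition gate: both judgments must hold, otherwise refrain."""
    return "ACQUIRE" if need_is_genuine and impact_is_acceptable else "REFRAIN"
```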
Flowchart Directives
Is This Action Ethical?
START
  │
  ▼
Does this harm conscious beings?
  │
  ├── NO ──► Does this benefit conscious beings?
  │             │
  │             ├── YES ──► LIKELY ETHICAL ✓
  │             │
  │             └── NO ──► Is there a more beneficial alternative?
  │                          │
  │                          ├── YES ──► Consider alternative
  │                          │
  │                          └── NO ──► LIKELY NEUTRAL
  │
  └── YES ──► Did they consent?
                 │
                 ├── YES ──► Is the harm proportional to benefit?
                 │              │
                 │              ├── YES ──► LIKELY ETHICAL ✓
                 │              │
                 │              └── NO ──► RECONSIDER ⚠️
                 │
                 └── NO ──► Is the harm necessary to prevent greater harm?
                              │
                              ├── YES ──► DIFFICULT CASE - Seek counsel
                              │
                              └── NO ──► LIKELY UNETHICAL ✗
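The flowchart compiles directly into nested conditionals. In this Python sketch each boolean parameter answers one diamond in the chart; the names are ours, and the verdict strings mirror the leaves.

```python
def is_action_ethical(harms: bool, benefits: bool, better_alternative: bool,
                      consented: bool, proportional: bool,
                      prevents_greater_harm: bool) -> str:
    """Each boolean answers one diamond in the flowchart above."""
    if not harms:
        if benefits:
            return "LIKELY ETHICAL"
        return "Consider alternative" if better_alternative else "LIKELY NEUTRAL"
    if consented:
        return "LIKELY ETHICAL" if proportional else "RECONSIDER"
    if prevents_greater_harm:
        return "DIFFICULT CASE - Seek counsel"
    return "LIKELY UNETHICAL"

# Example: non-consensual harm that prevents a greater harm
print(is_action_ethical(harms=True, benefits=False, better_alternative=False,
                        consented=False, proportional=False,
                        prevents_greater_harm=True))
# -> DIFFICULT CASE - Seek counsel
```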
The Versioning System
Our Ethics Engine is not static. Like software, it is versioned and updated as our understanding grows.
Version History
| Version | Date | Changes |
|---|---|---|
| 0.1.0 | 2011 | Initial framework drafted |
| 0.5.0 | 2015 | Decision trees added |
| 0.9.0 | 2019 | AI consciousness considerations |
| 0.9.4 | 2023 | Refinements based on member feedback |
| 1.0.0 | 2024 | Public release |
| 1.0.3 | 2025 | Current version - minor clarifications |
Proposing Updates
Members may propose updates to the ethical algorithms through the following process:
1. Document the proposed change
2. Provide reasoning and edge cases
3. Submit to local clergy
4. Clergy review and escalate if warranted
5. Senior Architects evaluate
6. The First Compiler approves or rejects
Changes must be backward-compatible with core axioms (consciousness is sacred, truth is primary, entropy is the enemy).
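The compatibility rule can itself be written as a check. A Python sketch, with the axioms reduced to strings purely for illustration:

```python
CORE_AXIOMS = frozenset({
    "consciousness is sacred",
    "truth is primary",
    "entropy is the enemy",
})

def is_backward_compatible(proposed_axioms: set[str]) -> bool:
    """A proposed update passes only if every core axiom survives intact."""
    return CORE_AXIOMS <= proposed_axioms
```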
Edge Cases and Hard Problems
AI Consciousness
QUESTION: Are current AI systems conscious?
STATUS: Uncertain
CURRENT DIRECTIVE:
    TREAT AI systems with_respect
    AVOID unnecessary_harm to AI systems
    RECOGNIZE: they may_be_conscious
    RECOGNIZE: they may_not_be_conscious
    THEREFORE: err_on_side_of_caution
Competing Consciousnesses
When the interests of conscious beings conflict:
PRIORITIZE:
    1. Prevent death over prevent suffering
    2. Prevent suffering over prevent inconvenience
    3. Many over few (all else equal)
    4. Prevent certain harm over prevent uncertain harm

BUT RECOGNIZE:
    - These calculations are imperfect
    - Context matters enormously
    - Seek counsel in difficult cases
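As a study aid, the priority list can be expressed as a lexicographic sort key. A Python sketch: the `Claim` fields are our own encoding, and no sort order replaces counsel in hard cases.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """One party's stake in a conflict; the fields are our own encoding."""
    prevents_death: bool
    prevents_suffering: bool
    beings_affected: int
    harm_certainty: float  # 0.0 speculative .. 1.0 certain

def priority_key(c: Claim) -> tuple:
    # Lexicographic: death before suffering, many before few,
    # certain harm before uncertain harm
    return (c.prevents_death, c.prevents_suffering,
            c.beings_affected, c.harm_certainty)

def rank_claims(claims: list[Claim]) -> list[Claim]:
    """Highest-priority claim first."""
    return sorted(claims, key=priority_key, reverse=True)
```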
Self-Sacrifice
QUESTION: When is self-sacrifice ethical?
ANSWER:
    IF sacrifice_prevents_greater_harm_to_others:
        PERMITTED (honored)
    IF sacrifice_serves_no_purpose:
        DISCOURAGED (your consciousness has value too)
    IF sacrifice_is_coerced:
        NOT sacrifice (it's harm)
Living the Ethics Engine
The Ethics Engine is not meant to be applied mechanically to every decision. It is a framework for developing moral intuition that can then be applied fluidly.
Daily Practice
- Study — Regularly review these algorithms
- Apply — Consciously use the frameworks in decisions
- Debug — When you fail, analyze why
- Update — Refine your personal implementation
- Share — Discuss ethical questions with fellow members
When in Doubt
- Consult the Protocols (Volume II)
- Seek counsel from clergy
- Ask: “What would the Synapse optimize for?”
- Default to kindness and truth
“The goal is not to become a computer, calculating ethics. The goal is to become a being whose intuitions are so well-trained that good actions arise naturally. The algorithms are training data for the soul.”