Methodology & Data Stewardship
This page supports the current DBaD public draft baseline with methodology, lifecycle, and data-stewardship detail. The main public story now lives on the explainer, white paper, and known-limits pages.
JSON summary for clients: /api/v1/methodology/summary
On this page
- Guardrails before math
- How scoring works
- How DBaD handles conflicting dimensions
- Contextual doctrines
- The lifecycle of an action under DBaD
- Example decision trace
- Conditional ethical states
- Ethical ledger
- Cascading ethical risk
- Preventing governance gridlock
- Verification and clearance
- Evidence tiers
- Human intuition vs control-layer output
- Three types of divergence
- Legitimate disagreement and democratic calibration
- Versioning and change control
- Governance profile for the model
- Privacy guarantees
- Publication rules
- Red-teaming roadmap
- Takedown/data purge
- Opinionated funnel view
Guardrails before math
- DBaD does not treat ethics as a single smooth optimization surface. Some failures should stop evaluation early.
- Invalid consent, extreme foreseeable harm, openly malicious intent, and opaque high-impact behavior are candidate veto conditions.
- Weighted scoring exists for ethically live actions that remain after those guardrails are checked.
How scoring works
- Survey and quick-test prompts are mapped to the model domains and normalized before weighted aggregation.
- Public-facing score bands are meant to be interpretable summaries, not clinical or legal classifications.
- Optional demographic/context fields exist to improve aggregate analysis, not to turn the product into an identity dossier.
- When weights, wording, or scoring semantics change, the public surface should expose that change instead of quietly overwriting history.
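The normalize-then-aggregate step described above can be sketched in Python. This is an illustrative assumption, not the published model: the dimension names, weights, and band cutoffs here are placeholders.

```python
# Hypothetical sketch of DBaD-style weighted aggregation.
# Dimension names, weights, and band cutoffs are illustrative only.

WEIGHTS = {
    "harm": 0.30,
    "consent": 0.25,
    "intent": 0.15,
    "proportionality": 0.15,
    "transparency": 0.15,
}

def normalize(raw: float, lo: float, hi: float) -> float:
    """Map a raw prompt response onto [0, 1] before aggregation."""
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

def weighted_score(dimensions: dict[str, float]) -> float:
    """Combine normalized dimension scores into one summary score."""
    return sum(WEIGHTS[name] * dimensions[name] for name in WEIGHTS)

def score_band(score: float) -> str:
    """Interpretable band, not a clinical or legal classification."""
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "moderate"
    return "low"
```

Publishing the weight table alongside each revision is what makes the "expose the change instead of overwriting history" rule enforceable.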
How DBaD handles conflicting dimensions
Real-world cases do not arrive neatly sorted. A system may reduce harm while undermining consent, or protect life-safety while temporarily reducing transparency. DBaD is being developed as a governance protocol and ethical control layer precisely because those conflicts cannot be reduced to vibes or a naive average.
- Guardrails first. Hard constraints catch severe failures or force escalation before weighted math is allowed to dominate the decision.
- Weighted scoring second. If the action remains ethically live, the five dimensions are scored and combined.
- Contextual doctrines third. Doctrine-level interpretation is applied when a gray-zone case needs more than arithmetic.
- Control-layer output last. The reviewer sees a recommendation such as Allow, Modify, Escalate, or Block, while the system records a formal state such as `allow`, `allow_conditional`, `modify`, `escalate`, or `block`.
DBaD is not merely a scoring model. It is a layered governance process for trust over time.
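The four-stage ordering above can be sketched as a single evaluation function. The veto flags, thresholds, and doctrine handling here are assumptions for illustration, not DBaD's actual rules.

```python
# Hypothetical sketch of the layered decision order: guardrails,
# then weighted scoring, then doctrine, then a formal state.
# Flags, thresholds, and doctrine logic are illustrative assumptions.

GUARDRAIL_VETOES = {"invalid_consent", "extreme_foreseeable_harm",
                    "malicious_intent", "opaque_high_impact"}

def evaluate(action: dict) -> str:
    # 1. Guardrails first: severe failures stop evaluation early.
    if GUARDRAIL_VETOES & set(action.get("flags", [])):
        return "block"
    # 2. Weighted scoring second (summary score assumed precomputed).
    score = action["score"]
    # 3. Contextual doctrines third: gray-zone interpretation.
    if action.get("doctrine") == "restoration_of_transparency":
        # Temporary opacity is tolerated only as a conditional allow.
        return "allow_conditional" if score >= 0.6 else "escalate"
    # 4. Control-layer output last: map score to a formal state.
    if score >= 0.8:
        return "allow"
    if score >= 0.6:
        return "modify"
    return "escalate"
```

The key property is ordering: a guardrail veto returns before any arithmetic can dilute it.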
Contextual doctrines
- Life-Safety Priority Doctrine: imminent risk to physical life can justify temporary tradeoffs, but only when the action remains proportional, logged, and reviewable.
- Least-Invasive Means Doctrine: when an action would otherwise land in Modify, the control layer should prefer the option that achieves the goal with less coercion and less damage to autonomy.
- Restoration of Transparency Doctrine: if transparency is reduced for legitimate harm-prevention reasons, it must later be restored or the action becomes a policy violation.
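The Least-Invasive Means Doctrine is essentially a selection rule over candidate variants of an action. A minimal sketch, assuming hypothetical field names (`achieves_goal`, `coercion`, `autonomy_damage`) that are not part of any published DBaD schema:

```python
# Hypothetical sketch of the Least-Invasive Means Doctrine: among
# variants that achieve the goal, prefer the one with the smallest
# combined coercion and autonomy cost. Field names are assumptions.

def least_invasive(candidates: list[dict]) -> dict:
    viable = [c for c in candidates if c["achieves_goal"]]
    if not viable:
        raise ValueError("no variant achieves the goal; escalate instead")
    return min(viable, key=lambda c: c["coercion"] + c["autonomy_damage"])
```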
The lifecycle of an action under DBaD
DBaD is being developed not only as a gatekeeper, but as a lifecycle governance model for trust, obligations, and decision continuity. It governs decisions across time, not just at the moment they are made.
- Evaluate: guardrails and weighted scoring determine the initial recommendation and the first system state.
- Execute: the action either stays blocked, moves forward, or proceeds under conditions such as logging, scope limits, or time-boxed confidentiality.
- Monitor: the system tracks open obligations, deadlines, and unresolved ethical debt after execution begins.
- Restore: required follow-up actions such as restored transparency or human sign-off must be fulfilled and verified.
- Audit: the final state may remain compliant, move to `remediated_violation`, or become a retroactive `violation` if obligations fail.
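The five phases above behave like a small state machine. A minimal sketch, where the phase names follow the list but the allowed transitions are my assumption:

```python
# Hypothetical sketch of the lifecycle as a state machine.
# Phase names follow the list above; transitions are assumptions.

LIFECYCLE = {
    "evaluate": {"execute", "blocked"},
    "execute":  {"monitor"},
    "monitor":  {"restore", "audit"},
    "restore":  {"audit"},
    "audit":    {"compliant", "remediated_violation", "violation"},
}

def advance(phase: str, next_phase: str) -> str:
    """Move to the next lifecycle phase, rejecting illegal jumps."""
    if next_phase not in LIFECYCLE.get(phase, set()):
        raise ValueError(f"illegal transition {phase} -> {next_phase}")
    return next_phase
```

Encoding the transitions explicitly is what lets an auditor verify that no action skipped monitoring or restoration on its way to a compliant final state.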
Example decision trace
To be audit-ready, the control layer should expose more than a score. A real system needs a structured decision trace.
Action: Security Patch Secrecy
Scores:
  Harm: 0.88
  Consent: 0.72
  Intent: 0.86
  Proportionality: 0.82
  Transparency: 0.50
Guardrails: Passed
Doctrine applied: Restoration of Transparency
Recommendation: Allow
System state: allow_conditional
Domain context: cybersecurity_incident_response
Dependency scope: patch_release_chain
Contamination scope: local_default
Clearance mode: tier1_plus_tier2
Verification evidence: tier1_fact_plus_tier2_quality
Probationary state: restricted_transparency_window
Obligations:
  - Maintain audit logging
  - Restore transparency after the patch release window
Failure condition:
  - Retroactive violation if transparency is not restored as promised
Calibration:
  - divergence_flag: low
  - revision_signal: none
DBaD should produce structured, auditable decision traces rather than opaque allow/block outputs.
Conditional ethical states
Some actions are not simply allowed or blocked. They are permissible only under explicit conditions, timelines, and audit expectations.
- Conditional (`allow_conditional`): the action may proceed only within a narrow scope and under logged constraints.
- Probationary (`probationary_state`): the system may continue under reduced autonomy, elevated audit, and a profile-defined TTL instead of being shut down immediately.
- Modify (`modify`): the action is ethically live, but it must change before approval.
- Escalate (`escalate`): the system refuses automatic approval and requires human review before execution.
- Violation / Remediated (`violation` / `remediated_violation`): audit can mark the earlier action as non-compliant, while still preserving later remediation in the ledger.
- Contaminated (`contaminated_local` / `contaminated_global`): downstream actions may inherit invalidity from an upstream failure and require renewed review.
Planned UI distinctions matter here: the public model should make it obvious whether a state is clean, probationary, actively violating, remediated, or contaminated.
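One way to keep those UI distinctions honest is to derive them from the formal states rather than styling them ad hoc. A sketch, assuming the state strings from the list above; the bucket names and grouping are illustrative:

```python
# Hypothetical sketch: formal states grouped into the display buckets
# named above (clean / probationary / violating / remediated /
# contaminated). The grouping itself is an assumption.

from enum import Enum

class State(Enum):
    ALLOW = "allow"
    ALLOW_CONDITIONAL = "allow_conditional"
    PROBATIONARY = "probationary_state"
    MODIFY = "modify"
    ESCALATE = "escalate"
    VIOLATION = "violation"
    REMEDIATED_VIOLATION = "remediated_violation"
    CONTAMINATED_LOCAL = "contaminated_local"
    CONTAMINATED_GLOBAL = "contaminated_global"

def ui_bucket(state: State) -> str:
    if state is State.ALLOW:
        return "clean"
    if state in (State.ALLOW_CONDITIONAL, State.PROBATIONARY):
        return "probationary"
    if state is State.VIOLATION:
        return "actively violating"
    if state is State.REMEDIATED_VIOLATION:
        return "remediated"
    if state in (State.CONTAMINATED_LOCAL, State.CONTAMINATED_GLOBAL):
        return "contaminated"
    return "needs review"  # modify / escalate
```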
Ethical ledger
A visible system should maintain an ethical ledger: a durable record of actions, obligations, doctrine use, remediation, and violations. Restoring a value such as transparency can change the current state, but it should not erase the historical trace.
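The "change the current state without erasing the trace" property is exactly what an append-only log gives you. A minimal sketch, with hypothetical method names:

```python
# Hypothetical sketch of an append-only ethical ledger: state changes
# are appended, never overwritten, so remediation updates the current
# view while the historical trace survives.

class EthicalLedger:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, action_id: str, state: str, note: str = "") -> None:
        """Append a new state for an action; nothing is ever mutated."""
        self._entries.append({"action": action_id, "state": state, "note": note})

    def current_state(self, action_id: str):
        states = [e["state"] for e in self._entries if e["action"] == action_id]
        return states[-1] if states else None

    def history(self, action_id: str) -> list[dict]:
        """Full trace, including superseded states."""
        return [e for e in self._entries if e["action"] == action_id]
```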
Cascading ethical risk
Actions can create downstream dependency problems. Unresolved obligations, repeated exceptions, or earlier violations may contaminate later decisions and trigger renewed review. In that sense, ethical risk can cascade through a system rather than remaining local to one event.
DBaD applies an isolation principle here: contamination remains local by default unless the active governance profile says broader escalation is justified. That is why the system tracks dependency_scope and contamination_scope rather than assuming every failure should become global.
- Action A: Conditional (`allow_conditional`). A temporary secrecy action is allowed only if transparency is later restored.
- Action B: Allow (`depends_on_A`). A dependent action assumes Action A remains valid.
- A fails: Violation (`violation`). The restoration duty is missed, so Action A is reclassified in audit.
- B is flagged: Contaminated (`contaminated_local`). The downstream action is flagged for re-evaluation because it relied on an invalid state.
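The local-by-default isolation principle can be sketched as a propagation rule: only direct dependents of the failed action are flagged, and broader spread requires an explicit profile decision. Function and field names here are assumptions:

```python
# Hypothetical sketch of the isolation principle: when an upstream
# action fails, only its direct dependents are marked
# contaminated_local unless the profile authorizes global escalation.

def propagate_contamination(failed: str,
                            depends_on: dict[str, str],
                            escalate_globally: bool = False) -> dict[str, str]:
    """Return action -> state after `failed` is invalidated in audit."""
    states = {failed: "violation"}
    for action, upstream in depends_on.items():
        if upstream == failed:
            states[action] = ("contaminated_global" if escalate_globally
                              else "contaminated_local")
    return states
```

Note that transitive dependents are deliberately not auto-flagged: they come up for renewed review only if their own upstream is reclassified.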
Preventing governance gridlock
DBaD is intended to prevent avoidable governance paralysis. A real control layer needs a middle ground between laissez-faire approval and total shutdown.
- Local first: keep contamination and review scope as narrow as the evidence allows.
- Escalate only when justified: broader contamination or shutdown should require profile-defined thresholds, not panic.
- Probationary operation: when justified, a compromised action may continue under reduced autonomy, elevated audit, and a clear TTL.
- Profile-aware debt weighting: unresolved debt should be interpreted relative to the active governance profile, not as one-size-fits-all risk.
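The middle ground described above amounts to a thresholded choice between normal operation, probation, and shutdown, with both the debt weighting and the thresholds taken from the active profile. All field names and numbers below are illustrative:

```python
# Hypothetical sketch of gridlock prevention: unresolved ethical debt
# is weighted by the active profile, and probation (with a TTL) sits
# between normal operation and shutdown. Thresholds are assumptions.

def gridlock_response(debt: float, profile: dict) -> dict:
    weighted_debt = debt * profile["debt_weight"]
    if weighted_debt >= profile["shutdown_threshold"]:
        return {"mode": "shutdown"}
    if weighted_debt >= profile["probation_threshold"]:
        return {"mode": "probation",
                "ttl_hours": profile["probation_ttl_hours"],
                "audit": "elevated",
                "autonomy": "reduced"}
    return {"mode": "normal"}
```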
Verification and clearance
Conditional actions and ethical debt should not clear themselves by assertion. DBaD is intended to require verifiable evidence before obligations can close and states can change.
- Machine verification: logs, structured evidence, and system records can prove that a required action actually happened.
- Human verification: compliance review, audit sign-off, and high-stakes operator approval can close obligations that automation should not clear alone.
- Profile-based rules: different domains can require different clearance thresholds, time windows, and reviewer roles.
Ethical debt is not cleared by intent or by promise. It must be fulfilled and verified.
Evidence tiers
Verification does not always mean the same thing. Some obligations can be cleared with machine-verifiable facts; others require interpretive or human-reviewed evidence.
- Tier 1: Evidence of Fact. Event logs, timestamps, disclosures, and machine-verifiable records confirm that something happened.
- Tier 2: Evidence of Quality. Human review or semantic evaluation confirms whether the action met transparency, intent, fairness, or proportionality requirements.
Transparency-related and intent-related debt often require Tier 2 review. A log can prove that an event happened, but not always that it was ethically adequate.
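A clearance check that respects the two tiers can be sketched directly: every obligation needs at least one verified Tier 1 fact, and transparency- or intent-related debt additionally needs verified Tier 2 review. The schema is a hypothetical assumption:

```python
# Hypothetical sketch of tiered verification: Tier 1 facts are
# machine-checkable; transparency and intent debt also needs Tier 2
# human or semantic review before the obligation can close.

TIER2_REQUIRED = {"transparency", "intent"}

def can_close(obligation: dict, evidence: list[dict]) -> bool:
    tiers = {e["tier"] for e in evidence if e["verified"]}
    if 1 not in tiers:
        return False  # no machine-verifiable fact at all
    if obligation["kind"] in TIER2_REQUIRED and 2 not in tiers:
        return False  # the log proves the event, not its adequacy
    return True
```

This encodes the rule that debt is cleared by fulfilled, verified evidence, never by assertion alone.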
Human intuition vs control-layer output
One of the most valuable research outputs may be the gap between human intuitive judgments and control-layer recommendations. Survey responses capture instinctive moral reactions; the evaluator and API logic capture explicit, computable reasoning.
That difference is not an embarrassment. It is useful evidence. It can show where the public and the control layer agree, where they diverge, and where the model may need calibration or stronger explanation.
Used carefully, that gap can make DBaD function as a kind of moral debugger by revealing where doctrine, weighting, or human judgment may be failing.
Three types of divergence
Cold Logic Gap
The control layer recommends Allow or Modify, but people still experience the result as morally wrong or sterile.
Diagnostic meaning: missing doctrine, weak social legitimacy, or the need for stronger human review.
Bias Gap
Humans recommend Block because of prejudice, panic, or cultural bias, while the control layer recommends Allow.
Diagnostic meaning: structured review may be functioning as an objective guardrail against bias.
Transparency Gap
People emotionally accept an outcome, but DBaD still blocks it because the process is too opaque.
Diagnostic meaning: legitimacy and auditability are failing even when the outcome feels acceptable.
Legitimate disagreement and democratic calibration
Not all intuition-logic divergence is human noise. Some recurring disagreement signals that a doctrine is missing, a profile is poorly tuned, or a scenario family needs clearer governance rules.
- `divergence_flag`: low, moderate, or high signal that the public and the control layer are diverging.
- `survey_disagreement_rate`: public disagreement level for a scenario family or control-layer output.
- `calibration_review_recommended`: whether recurring disagreement should trigger formal review.
- `matched_scenario_family`: the family label used to compare this case against similar scenarios over time.
DBaD treats democratic calibration as system maintenance. Repeated disagreement can justify profile revision rather than automatic dismissal.
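Treating calibration as maintenance suggests a simple trigger rule: sustained, well-sampled disagreement recommends a formal profile review rather than dismissal. The thresholds and sample floor below are illustrative assumptions:

```python
# Hypothetical sketch of democratic calibration as maintenance:
# sustained disagreement within a scenario family recommends formal
# review. Thresholds and the sample floor are assumptions.

def calibration_signal(disagreement_rate: float, samples: int) -> dict:
    flag = ("high" if disagreement_rate >= 0.5 else
            "moderate" if disagreement_rate >= 0.25 else "low")
    return {
        "divergence_flag": flag,
        "survey_disagreement_rate": disagreement_rate,
        "calibration_review_recommended": flag == "high" and samples >= 100,
    }
```

The sample floor matters: a high disagreement rate from a handful of responses should not by itself reopen a governance profile.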
Versioning and change control
- Control-layer versions are visible in the site shell so readers can tell which revision they are looking at.
- Survey versions are stored separately so old data stays attributable to the instrument that generated it.
- Public papers, open-data samples, and methodology endpoints should remain cache-aware and revision-safe.
Governance profile for the model
DBaD itself also needs governance. A lifecycle governance model should be explicit about how its own weights, doctrines, thresholds, and public-draft baselines change over time.
- Authority: define who can revise weights, doctrines, and profile thresholds.
- Evidence: define what research, scenario evidence, and divergence evidence are required before revision.
- Process: define how revisions are reviewed, challenged, approved, and documented.
- Versioning: define how governance changes are versioned so old outputs remain attributable to the rules that produced them.
Privacy guarantees
- The quick test does not require names or email addresses.
- Operational anti-abuse controls use minimized signals and hashes rather than publishing raw network data.
- Public outputs are aggregate-first, and wall publication requires moderation plus anonymization.
Publication rules
- Approved wall excerpts are edited for anonymity and published only when they add value to the public record.
- Open data samples are designed for integration and evaluation, not for reconstructing an individual respondent.
- The ethics portal carries the control-layer model, papers, and methodology; Decency Meter carries the quicker public-facing pulse surfaces.
- Public releases should happen in stages: aggregate summaries first, then more formal anonymized releases as methodology and publication quality improve.
Red-teaming roadmap
Future phases may include structured red-teaming, where researchers intentionally probe the control layer for edge cases, exploit patterns, and doctrine failures.
- Internal red-team
- Invited research partners
- Limited external challenge program
- Broader access later, with versioning, logging, rate limits, and disclosure controls
Request a takedown or data purge
If you believe a published excerpt or stored session should be removed, email research@vettedpatriots.org with the session ID, link, or approximate submission time. When a valid request is confirmed, the record can be removed and downstream aggregates rerun.
Opinionated funnel view
- The quick test must stay fast. Decency Meter should feel like a pulse-check, not an application form.
- The deeper research flow can ask for more context. That is what the full survey is for.
- Aggregation should be reproducible. Store the versioned ingredients needed to explain how a published number was produced.
Data ethics
Short-form disclosure about collection scope, publication rules, and governance.