About DBaD
Background, mission, and research posture for DBaD, a public draft governance protocol for trust over time.
Mission
The Don't-Be-a-Dick (DBaD) model began as a blunt moral shorthand and is being turned into a public, inspectable governance protocol. The ambition is modest on purpose: make the reasoning explicit enough that critics can find the weak points.
DBaD is not here to replace philosophy, law, medicine, or human judgment. It is here to provide transparent guardrails, structured scoring, and auditable traces so ethical claims can be compared, stress-tested, and revised in public.
In the current public draft baseline, the focus is trust propagation across decisions, actors, and time: how trust continues, where it should stop, and what must remain visible in the trace when governance pressure increases.
On this page
- Mission
- What problem this is trying to solve
- Where the project is now
- The five domains
- Guardrails before scoring
- Conflicting dimensions and gray zones
- DBaD and AI governance
- Across time, not just at one moment
- Structured decision traces
- Ethical ledger
- Dependency chains and cascading risk
- Verification and clearance
- Research direction
- Origins and evolution
- Applications
- Falsifiability and revision
- Participation and transparency
What problem this is trying to solve
Most people already use a rough internal version of DBaD: they ask whether an action is cruel, manipulative, deceptive, wildly disproportionate, or plainly disrespectful of consent. What is usually missing is a shared structure for explaining why one judgment beat another.
DBaD tries to bridge that gap. It gives people a common language for saying, “this failed because it caused too much harm,” or “this looked efficient but failed because the consent structure collapsed.” The model matters only if that language makes disagreement sharper and more honest.
Where the project is now
DBaD no longer lives only as a moral shorthand or a background idea. It is being shaped into a governance protocol for trust over time: a way to review proposed actions, apply guardrails, return a structured recommendation, and keep that recommendation open to later audit.
That is also why the project now reads more like a lifecycle governance model than a framework exploration or static scorecard. The aim is to make ethical review legible enough to inspect, practical enough to use, and structured enough to test against real scenarios.
Decency Meter is the public signal layer; DBaD is the underlying ethical control layer and governance model.
The five domains
- Harm (H): the expected net harm or suffering an action causes.
- Consent (C): the degree of autonomy and informed choice for those affected.
- Intent (I): the moral direction of purpose, from malicious through neutral to benevolent.
- Proportionality (P): whether the response fits the scale and context of the situation.
- Transparency (T): openness to scrutiny, audit, and explanation.
These five variables form the working space of the model. Their weights can change by domain or revision, but the point is always the same: make the tradeoffs visible instead of smuggling them in under vibes or status. In the current public baseline, they support a larger trust-over-time governance process rather than serving as the whole public identity of DBaD.
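As a rough illustration, the five variables can be carried as a single record. This is a hypothetical sketch, not a published schema; the field names and the 0.0 to 1.0 scale are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DomainScores:
    """One record of the five DBaD domain scores (hypothetical schema)."""
    harm: float             # H: expected net harm (higher is worse)
    consent: float          # C: degree of informed, autonomous choice
    intent: float           # I: 0.0 malicious .. 1.0 benevolent
    proportionality: float  # P: fit between response and situation
    transparency: float     # T: openness to scrutiny and audit

    def as_tuple(self):
        return (self.harm, self.consent, self.intent,
                self.proportionality, self.transparency)
```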
Guardrails before scoring
DBaD is not pure arithmetic. Some failures should force a stop before weighted math starts. Invalid consent, catastrophic foreseeable harm, deliberate cruelty, and refusal to explain high-impact actions are all candidate guardrails.
After that, weighted scoring helps compare options that are still morally live. That sequence matters: otherwise the control layer would look like an optimization engine for polishing obvious abuse.
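The guardrails-first sequence can be sketched in a few lines. Everything here is illustrative: the guardrail names echo the candidates above, but the weights, score formula, and action fields are invented for the example, not part of any published baseline.

```python
# Step 1 checks hard stops; step 2 only runs for actions that survive them.
GUARDRAILS = (
    ("invalid_consent", lambda a: not a["consent_valid"]),
    ("catastrophic_harm", lambda a: a["harm"] >= 0.9),
    ("deliberate_cruelty", lambda a: a["cruel"]),
    ("refuses_explanation", lambda a: a["high_impact"] and not a["explainable"]),
)

# Illustrative weights over the five domains (harm counts against a score).
WEIGHTS = {"harm": -0.4, "consent": 0.25, "intent": 0.1,
           "proportionality": 0.1, "transparency": 0.15}

def review(action: dict) -> dict:
    # Step 1: any tripped guardrail ends review before weighted math starts.
    for name, tripped in GUARDRAILS:
        if tripped(action):
            return {"status": "blocked", "guardrail": name, "score": None}
    # Step 2: weighted scoring compares options that are still morally live.
    score = sum(w * action[dim] for dim, w in WEIGHTS.items())
    return {"status": "scored", "guardrail": None, "score": round(score, 3)}
```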
Conflicting dimensions and gray zones
DBaD matters most when the dimensions pull in different directions. A system may serve a good goal while using manipulative means, or reduce harm while temporarily limiting transparency. Those are not edge cases to hide from. They are the cases that make a control layer worth building.
That is why the control layer is moving toward a layered structure: guardrails first, weighted scoring second, contextual doctrines where needed, and human review when the conflict is too significant for automatic approval. Gray-zone scenarios are now part of the public site and demo for exactly that reason.
DBaD and AI governance
DBaD was not created as a branding exercise for AI ethics. It emerged from a simpler question: can ordinary moral reasoning be made explicit enough to inspect? That same question now matters in AI governance. When systems act at scale, vague ethical language is not enough. Review logic has to become visible, debatable, and testable.
That makes DBaD relevant as a policy-adjacent control layer, an interpretable pre-execution review model, a post-hoc audit surface, and a decision-support tool for regulated environments.
For the current public baseline, start with DBaD Explained, then read the white paper v3 and known limits. This page remains supporting background and reference context.
Across time, not just at one moment
DBaD governs decisions across time, not just at the moment they are made.
That matters because some actions are not simply allowed or blocked once and forgotten. They may enter conditional states, carry restoration requirements, and later be reviewed again in audit if those requirements were not met.
In plain terms: an action can look acceptable at first, remain acceptable only under conditions, and later be reclassified if those conditions fail.
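A minimal sketch of those state changes, with illustrative state names (the protocol does not publish this exact set):

```python
# "approved" can become "conditional" (obligations attached), and
# "conditional" can resolve back to "approved" or be "reclassified"
# in a later audit if its conditions fail.
ALLOWED_TRANSITIONS = {
    "approved": {"conditional"},
    "conditional": {"approved", "reclassified"},
    "reclassified": set(),  # terminal until a new review is opened
}

def transition(state: str, new_state: str) -> str:
    """Move a decision to a new lifecycle state, or refuse."""
    if new_state not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```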
Structured decision traces
DBaD is meant to produce structured decision traces, not opaque verdicts. A serious review system should be able to show what scores were used, which guardrails mattered, whether a doctrine changed the interpretation, what obligations were created, and what the current or final status became.
This is part of what makes the project feel more like governance infrastructure than a slogan. The point is not only to say what the answer was, but to show how the answer was reached and what still remains open.
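A trace like that could be carried as a small record. The field names below are assumptions that mirror the list above, not a defined API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecisionTrace:
    """Structured decision trace (field names are illustrative)."""
    scores: dict                     # domain scores used in the review
    guardrails_tripped: list         # guardrails that mattered, if any
    doctrine_applied: Optional[str]  # doctrine that changed interpretation
    obligations: list = field(default_factory=list)  # duties created
    status: str = "open"             # current or final status

    def explain(self) -> str:
        # One-line audit summary: how the answer was reached, what is open.
        return (f"status={self.status} "
                f"guardrails={self.guardrails_tripped or 'none'} "
                f"doctrine={self.doctrine_applied or 'none'} "
                f"open_obligations={len(self.obligations)}")
```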
Ethical ledger
DBaD is moving toward an ethical ledger model. That means actions are not forgotten once they happen. Violations, remediation, and status transitions remain part of the history.
Restoration matters, but restoration does not erase the record. It can change the current state without pretending the earlier problem never existed.
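An append-only ledger captures that distinction: restoration appends a new entry and updates the current state, while the earlier violation stays in the history. The entry shape here is an illustrative assumption:

```python
class EthicalLedger:
    """Append-only history of actions, violations, and remediation."""

    def __init__(self):
        self._entries = []  # never mutated in place, only extended

    def record(self, action_id: str, event: str, state: str):
        self._entries.append(
            {"action": action_id, "event": event, "state": state})

    def current_state(self, action_id: str):
        # The latest entry wins for current state; history stays visible.
        for entry in reversed(self._entries):
            if entry["action"] == action_id:
                return entry["state"]
        return None

    def history(self, action_id: str):
        return [e for e in self._entries if e["action"] == action_id]
```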
Dependency chains and cascading risk
Some actions depend on earlier actions staying valid. If the earlier action later fails, downstream actions may need to be reviewed again. That is part of why DBaD is becoming a lifecycle governance model instead of staying a one-moment scoring exercise.
In other words, ethical risk can propagate through a chain of decisions. A system that ignores that chain will miss part of the real governance problem.
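That propagation can be sketched as a walk over a dependency graph. The graph representation is an assumption for illustration:

```python
def flag_for_rereview(depends_on: dict, failed: str) -> set:
    """depends_on maps each action to the set of actions it relies on.

    Returns every action downstream of the failed one, directly or
    transitively, so it can be queued for re-review.
    """
    flagged, frontier = set(), {failed}
    while frontier:
        current = frontier.pop()
        for action, deps in depends_on.items():
            if current in deps and action not in flagged:
                flagged.add(action)
                frontier.add(action)  # downstream of downstream, too
    return flagged
```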
Verification and clearance
Conditional states and ethical debt do not clear themselves. DBaD assumes that follow-up duties must be verified before a state can truly close.
That verification may come from machine evidence such as logs, from human review, or from profile-based rules that define what kind of clearance a domain requires.
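A clearance check along those lines might look like this; the evidence categories are assumptions loosely matching the three kinds named above:

```python
# Illustrative evidence kinds: machine logs, human review, or a
# profile-based rule defining what clearance a domain requires.
ACCEPTED_EVIDENCE = {"log", "human_review", "profile_rule"}

def can_close(obligations: list) -> bool:
    # Every follow-up duty needs verified evidence of an accepted kind;
    # a missing or unverified evidence field means the state stays open.
    return all(
        ob.get("evidence") in ACCEPTED_EVIDENCE and ob.get("verified")
        for ob in obligations
    )
```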
Research direction
One of the strongest research directions now is the gap between human intuitive judgments and control-layer recommendations. That gap may reveal where moral intuition and explicit logic diverge, which is useful for both governance design and system revision.
It may also surface divergence types around trust, remediation, and even forgiveness: when people accept a repaired action, when they still do not trust it, and when the system should or should not treat remediation as enough. Those questions are part of what makes DBaD testable rather than merely declarative.
Origins and evolution
DBaD began as a field shorthand for “don't violate basic decency.” As AI alignment debates, public trust problems, and everyday governance questions converged, that shorthand became a candidate governance protocol and ethical control layer: E(A), a structured way to compare ethically live actions.
The project is still in motion. The ethics portal holds the control-layer model, papers, methodology, and revision notes. Decency Meter is the public pulse-check built on top of that work. The two sites are related, but they are not the same product.
Applications
- Ethics education: teach people how to articulate tradeoffs instead of hand-waving them.
- AI alignment: expose where a system is optimizing around decency rather than respecting it.
- Decision review: score whether an AI recommendation should be actioned automatically or escalated for human review.
- Policy analysis: compare content-moderation, access-control, or public policy options against explicit harm, consent, and proportionality claims.
- Everyday conduct: provide a disciplined version of the common-sense mirror test.
These use cases only matter if the control layer stays legible to outsiders. A model nobody can audit is branding, not ethics infrastructure.
Falsifiability and revision
DBaD is a working hypothesis, not a revealed truth. It should be judged by where it clarifies hard cases, where it collapses under counterexample, and whether revisions make the control layer better rather than merely more complicated.
That is why versioning, public papers, methodology notes, and scenario intake matter. If the control layer fails on recurring edge cases, the correct response is revision, not defense by slogan.
Participation and transparency
The ethics portal exists to make the work inspectable. The papers, methodology pages, API docs, and public-facing updates all support that goal. Decency Meter contributes public-response data and a lightweight entry point, but the control-layer work remains here.
Participation can mean several different things: read the model, critique the assumptions, submit a scenario, use the research survey, or build responsibly on the public API. Open inquiry is the operating requirement, not a decorative value.
Deeper readers should continue into methodology, the research demo, API docs, and papers.
Where to go next
Use the main public path to understand the current DBaD baseline.
DBaD Explained · White paper v3 · Known limits · Research demo
Decency Meter
Visit the public pulse-check when you want the fast user-facing survey surface.