By Laini Byfield

ETHICMAP

An implementation cycle for Small Data Ethics — built for repeatable program operations and continuous learning.


The ETHICMAP cycle

E — Environment

Where are we operating — constraints, norms, and power dynamics.

T — Timing

When do things happen — cutoffs, lags, retroactivity, and notice.

H — Harmony

For whom does this work — who benefits, who is burdened, who is exposed.

I — Incentives

Why would someone participate — ensure feasibility without coercion.

C — Calibration

What can realistically change — align expectations to capacity and context.

M — Measurement

What moved — include uncertainty and error rates, not just point estimates.
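One way to honor this in practice is to report every rate alongside an interval rather than a bare percentage. A minimal sketch (the function name and the normal-approximation interval are illustrative choices, not part of ETHICMAP itself):

```python
import math

def rate_with_interval(successes: int, total: int, z: float = 1.96) -> dict:
    """Point estimate plus a normal-approximation 95% confidence interval."""
    p = successes / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return {"estimate": p,
            "low": max(0.0, p - margin),
            "high": min(1.0, p + margin)}

# 42 of 120 participants improved: report the range, not just "35%".
print(rate_with_interval(42, 120))
```

With small-data sample sizes the interval is often wide; publishing it keeps the program honest about what actually moved.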

A — Application

How do we sustain and repeat it — standardize without freezing learning.

P — Publish

Formalize learning — document decisions, tradeoffs, and changes across cycles.

Design stance

ETHICMAP treats data errors as moral events — not merely technical bugs.

Contestability

People can challenge outcomes, submit evidence, and receive timely review.

Traceability

Every outcome can be traced to a source file, load date, and rule version.
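Traceability can be as simple as a small provenance record attached to every published outcome. A minimal sketch, with illustrative field names (ETHICMAP prescribes the three facts, not this schema):

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class OutcomeTrace:
    """Provenance attached to every published outcome."""
    outcome_id: str
    source_file: str    # the file the input data came from
    load_date: str      # ISO date the source was ingested
    rule_version: str   # version of the decision rule applied

trace = OutcomeTrace("out-0007", "intake_2024_q1.csv", "2024-04-02", "rules-v3.1")
print(asdict(trace))
```

Freezing the record (`frozen=True`) means a trace cannot be silently edited after the fact; corrections produce a new trace instead.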

Repair

Correction triggers reprocessing — changes are communicated and documented.
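The repair loop can be sketched as: fix the source record, then find every outcome derived from it and flag it for recomputation. A hypothetical illustration (the dependency map and record shapes are assumptions, not a prescribed data model):

```python
def apply_correction(records, record_id, field, new_value, downstream):
    """Correct one source record, then flag every derived outcome for reprocessing."""
    records[record_id][field] = new_value
    # Outcomes whose inputs include the corrected record must be recomputed
    # and the change communicated to the people affected.
    return sorted(oid for oid, deps in downstream.items() if record_id in deps)

records = {"r1": {"score": 10}, "r2": {"score": 7}}
downstream = {"out-1": {"r1"}, "out-2": {"r2"}, "out-3": {"r1", "r2"}}
print(apply_correction(records, "r1", "score", 12, downstream))  # → ['out-1', 'out-3']
```

The returned list of affected outcomes is exactly what belongs in the change log and in the notice sent to participants.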

ETHICMAP assumes relational exposure — not anonymity. Read: Small Data Ethics is not Small Data Privacy.

How to run ETHICMAP in real life

  • Use ETHICMAP as a standing agenda for monthly or quarterly operations reviews.
  • Keep a change log: rule updates, data source changes, known issues, and decisions.
  • Publish learning internally: what changed, why, and what it means for participants.
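The change log above need not be elaborate; an append-only file of structured entries is enough. A minimal sketch, assuming a JSON-lines file and illustrative category names:

```python
import datetime
import json

def log_change(path, category, summary, decided_by):
    """Append one structured entry to an append-only JSON-lines change log."""
    entry = {
        "date": datetime.date.today().isoformat(),
        "category": category,   # e.g. "rule-update", "data-source", "known-issue", "decision"
        "summary": summary,
        "decided_by": decided_by,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_change("changes.jsonl", "rule-update",
           "Raised eligibility cutoff; affected outcomes reprocessed",
           "quarterly operations review")
```

Appending rather than overwriting preserves the history across cycles, which is what the Publish step draws on.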