AI Policy · March 10, 2026 · 5 min read

Getting Started in AI Ethics Policy

You don't need a law degree to work in AI ethics. You need to understand how systems fail - and how to write the rules that prevent it.

ai-ethics · ai-policy · governance · ai-governance · career


The Question Nobody Asked

I've spent 15 years building software systems: checkout flows serving millions, biometric authentication platforms, and AI agents that SSH into my homelab to execute tasks autonomously. In all that time, no product manager ever asked me, "What if this system decides wrong about someone?" I built Vargos, an AI agent that can access infrastructure, prepare tools, and execute jobs via WhatsApp. It is currently running with no policy documents, no bias testing, no risk assessments, and no documented boundaries.

What AI Ethics Policy Actually Is

This is not about philosophy. It's not about social movements. And it's not just risk management. It's about systems making decisions on behalf of people. When AI systems determine who gets approved for a loan, who gets a job, or who gets flagged as a fraudster, someone has to write the rules for how the system should work, how it gets tested, and what happens when it fails. The work looks like:

  • Writing internal policy for how AI systems get evaluated before deployment
  • Running bias audits on models that touch hiring, lending, or healthcare
  • Mapping compliance requirements across jurisdictions (the EU has different rules than the US)
  • Translating technical risk into language executives and regulators understand

The Frameworks That Matter

Three frameworks form the backbone of AI governance globally. I won't cover them thoroughly—the references section links to the main sources—but here's what you need to know.

EU AI Act

Entered into force in August 2024, this is the most comprehensive AI regulation in the world. Most obligations, including the high-risk rules covering employment screening, credit scoring, and medical diagnostics, are due by August 2026 and require conformity assessments, human oversight, and documentation. Note, however, that the European Commission's proposed Digital Omnibus Regulation (late 2025) could postpone enforcement of the Annex III high-risk obligations until December 2027, so check the current status before planning against the August 2026 date. If you build or deploy AI in any market that touches Europe, this affects you.

NIST AI Risk Management Framework

The US answer — voluntary, but widely adopted. Released in 2023, with a Generative AI Profile added in July 2024. Four functions: Govern, Map, Measure, Manage. No enforcement teeth, but it is the framework US enterprises reference when they need to show they have thought about AI risk.

OECD AI Principles

Updated May 2024, adopted by 47 countries. Five pillars covering inclusive growth, human rights, transparency, robustness, and accountability. The 2024 update added environmental sustainability and misinformation as explicit concerns — reflecting the energy costs of large language models and the risks of AI-generated content at scale.

"But I'm an Engineer, Not a Policy Person"

I thought the same thing. Then I realized I'd been doing policy-adjacent work for years without calling it that.

  • Systems thinking - you already understand how components interact and how failures cascade; policy applies the same lens to organizations instead of code.
  • Risk assessment - you already think about failure modes; AI policy formalizes this into auditable documentation.
  • Technical communication - translating "the model has a 12% false positive rate on underrepresented demographics" into language a board can act on.
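That last translation starts from a number you compute yourself. Here is a minimal sketch of how a per-group false positive rate might be calculated from audit data; the group labels and records below are hypothetical illustration data, not from any real system.

```python
# Sketch: per-group false positive rates from labeled audit records.
# Each record is (group, predicted_positive, actually_positive).
from collections import defaultdict

def false_positive_rates(records):
    """Return {group: FP / actual negatives} for each group with negatives."""
    fp = defaultdict(int)   # predicted positive but actually negative
    neg = defaultdict(int)  # count of actual negatives per group
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Hypothetical audit sample.
audit = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, True),
]
rates = false_positive_rates(audit)
# group_a: 1 FP of 4 negatives = 0.25; group_b: 2 FP of 3 negatives ≈ 0.67
```

A gap like that between groups is exactly the kind of finding a bias audit has to surface and a policy has to assign a threshold and an owner to.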

What you'll need to learn: policy writing that survives legal review, mapping compliance from regulation to technical controls, and stakeholder management across engineering, legal, product, and execs. But the foundation? You already have it.

First Steps

You don't need a certification to get started, but you do need portfolio pieces that demonstrate competence.

1. Audit an AI tool you use. Pick a hiring platform, a content moderation tool, or a recommendation engine. Document what data it uses, what decisions it makes, what could go wrong, and what controls exist today, then write it up as a structured risk assessment.
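"Structured" just means every risk gets the same fields. Here is one hypothetical skeleton for a risk assessment entry; the field names and the example values are my own invention, not a standard schema.

```python
# Hypothetical skeleton for one entry in a structured risk assessment.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    system: str               # the AI tool under audit
    data_used: list[str]      # inputs the system consumes
    decision: str             # what the system decides or recommends
    failure_mode: str         # what could go wrong
    impact: str               # who is harmed, and how
    controls: list[str]       # mitigations available today
    severity: str = "unassessed"  # e.g. low / medium / high

# Example entry for an imagined resume-screening tool.
entry = RiskEntry(
    system="resume screening platform",
    data_used=["resume text", "employment history"],
    decision="rank candidates for recruiter review",
    failure_mode="model penalizes terms correlated with gender",
    impact="qualified candidates filtered out before human review",
    controls=["periodic bias audit", "human review of rejections"],
    severity="high",
)
```

The point of the fixed shape is that a reviewer can scan twenty entries and compare severity and controls at a glance.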

2. Write a sample policy. Draft an internal AI usage policy for a hypothetical company, covering data privacy, bias testing, human oversight requirements, and incident response using the NIST AI RMF as your structure.
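Using the NIST AI RMF as structure means organizing the policy under its four functions. A possible outline, sketched as data, might look like this; the section names under each function are illustrative suggestions, not NIST's own wording.

```python
# Hypothetical outline of an internal AI usage policy, organized by
# the four NIST AI RMF functions (Govern, Map, Measure, Manage).
policy_outline = {
    "Govern": ["roles and accountability", "acceptable-use rules", "data privacy"],
    "Map": ["inventory of AI systems", "intended use and context", "affected stakeholders"],
    "Measure": ["bias testing cadence", "accuracy and robustness metrics"],
    "Manage": ["human oversight requirements", "incident response", "decommissioning"],
}

# Print the outline as a one-line summary per function.
for function, sections in policy_outline.items():
    print(f"{function}: {', '.join(sections)}")
```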

3. Study a real failure. Amazon's recruiting tool, which penalized female applicants because it was trained on male-dominated data, is a good case study. Analyze which policy controls could have prevented it.

4. Follow the regulation. Track the EU AI Act implementation timeline through August 2027 so you know which requirements are enforceable now and which are coming.

Why This, Why Now

Most of the effort today goes into putting AI to work, while almost no one asks hard enough questions about how those systems will fail, who will be affected, and what the legal exposure is. I've spent fifteen years building systems that affect people without ever asking what happens when one of them fails. Now I'm asking, and learning to write the policies, so that "we didn't know it was biased" stops being an acceptable answer.


This is the first in a series on AI ethics policy. Next: four cases where AI hiring tools discriminated against candidates—and what a proper audit would have caught.

References