AI Policy · March 10, 2026 · 5 min read

Getting Started in AI Ethics Policy

You don't need a law degree to work in AI ethics. You need to understand how systems fail — and how to write the rules that prevent it.

ai-ethics · ai-policy · governance · ai-governance · career


The Question Nobody Asked

I've spent 15 years building software systems. Checkout flows serving millions. Biometric identity platforms. AI agents that SSH into my homelab and execute tasks autonomously.

Not once in those 15 years did a product manager ask me: "What happens when this system is wrong about someone?"

I built Vargos — an AI agent with SSH access to my infrastructure, tool orchestration, and autonomous task execution via WhatsApp. It works. It's also running without a single policy document governing what it can and can't do. No bias testing. No risk assessment. No documented boundaries.

That's fine for a homelab. It's not fine for a system making decisions about people's livelihoods.

That gap is what AI ethics policy exists to fill.

What AI Ethics Policy Actually Is

It's not philosophy. It's not activism. It's risk management for systems that make decisions about people.

When an AI system decides who gets a loan, who gets hired, who gets flagged for fraud — someone needs to have written down the rules for how that system should behave, how it gets tested, and what happens when it breaks.

The work looks like:

  • Writing internal policies for how AI systems get evaluated before deployment
  • Running bias audits on models that touch hiring, lending, or healthcare
  • Mapping compliance requirements across jurisdictions (the EU has different rules than the US)
  • Translating technical risk into language that executives and regulators understand

The Frameworks That Matter

Three frameworks form the backbone of AI governance globally. I won't cover them exhaustively — the references section links to the primary sources — but here's what you need to know to be dangerous.

EU AI Act

The most significant AI regulation in the world. Entered into force August 2024, with most obligations enforceable from August 2026. It classifies AI into four risk tiers: unacceptable uses are banned outright (social scoring, certain predictive policing), limited-risk systems such as chatbots carry transparency obligations, and minimal-risk systems such as spam filters carry none. Employment screening, credit scoring, and medical diagnostics land in the high-risk tier, which requires conformity assessments, human oversight, and detailed documentation.
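The tier logic can be sketched as a simple lookup. The use cases and tier assignments below are illustrative examples drawn from this post, not a legal classification (real classification means reading the Act's annexes against the specific deployment context):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, human oversight, documentation"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping only -- not legal advice.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "employment screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "medical diagnostics": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up a use case and summarise what the tier demands."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"
```

The point of writing it down like this, even informally, is that it forces the classification question before deployment rather than after.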

If you build or deploy AI in any market that touches Europe, this affects you.

NIST AI Risk Management Framework

The US answer — voluntary, but widely adopted. Released in 2023, with a Generative AI Profile added in July 2024. Four functions: Govern, Map, Measure, Manage. No enforcement teeth, but it's the framework US enterprises reference when they need to show they've thought about AI risk.

OECD AI Principles

Updated May 2024, adopted by 47 countries. Five pillars covering inclusive growth, human rights, transparency, robustness, and accountability. The 2024 update added environmental sustainability and misinformation as explicit concerns — reflecting the energy costs of large language models and the risks of AI-generated content at scale.

"But I'm an Engineer, Not a Policy Person"

I thought the same thing. Then I realised I'd been doing policy-adjacent work for years without calling it that.

  • Systems thinking — how components interact, where failures cascade. Policy is just systems thinking applied to organisations instead of code.
  • Risk assessment — you already think about failure modes. AI policy formalises that into auditable documentation.
  • Technical communication — translating "the model has a 12% false positive rate on underrepresented demographics" into language a board can act on.

What you need to learn: policy writing that survives legal review, compliance mapping from regulation to technical controls, and stakeholder management across engineering, legal, product, and executive teams. But the foundation? You already have it.
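That "false positive rate on underrepresented demographics" is exactly the kind of number a bias audit produces. A minimal sketch of the computation, using synthetic records and hypothetical group labels:

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, actual, predicted) tuples, where
    actual/predicted are booleans (e.g. 'flagged for fraud').
    FPR = false positives / actual negatives, computed per group."""
    false_positives = defaultdict(int)
    negatives = defaultdict(int)
    for group, actual, predicted in records:
        if not actual:  # only actual negatives can yield false positives
            negatives[group] += 1
            if predicted:
                false_positives[group] += 1
    return {g: false_positives[g] / n for g, n in negatives.items() if n}

# Synthetic data: group B's true negatives get flagged twice as often.
data = [
    ("A", False, False), ("A", False, False), ("A", False, True), ("A", True, True),
    ("B", False, True),  ("B", False, True),  ("B", False, False), ("B", True, True),
]
rates = false_positive_rate_by_group(data)
# rates["A"] ~= 0.33, rates["B"] ~= 0.67
```

A gap like that between groups is the raw material; the policy work is deciding in advance what gap is acceptable, who reviews it, and what happens when the threshold is breached.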

First Steps

You don't need a certification to start. You need portfolio pieces that demonstrate competence.

1. Audit an AI tool you use. Pick something — a hiring platform, a content moderation tool, a recommendation engine. Document what data it uses, what decisions it makes, what could go wrong, and what controls exist. Write it up as a structured risk assessment.

2. Write a sample policy. Draft an internal AI usage policy for a hypothetical company. Cover data privacy, bias testing, human oversight requirements, and incident response. Use the NIST AI RMF as your structure.

3. Study a real failure. Amazon's recruiting tool is the canonical example — trained on a decade of male-dominated hiring data, it learned to penalise applications containing the word "women's" and downgrade graduates of women's colleges. Understand not just what went wrong, but what policy controls would have caught it.

4. Follow the regulation. The EU AI Act implementation timeline runs through August 2027. Understanding what's enforceable today versus what's coming makes you useful to any company deploying AI in regulated markets.
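Steps 1 and 2 can start from a simple structured record. A hypothetical sketch, loosely mapped to the NIST AI RMF functions (the field names are my own, not taken from the framework):

```python
from dataclasses import dataclass, field

@dataclass
class RiskAssessment:
    # Fields loosely follow NIST AI RMF: Govern / Map / Measure / Manage.
    system_name: str
    owner: str                                          # Govern: who is accountable
    data_sources: list = field(default_factory=list)    # Map: what it consumes
    decisions_made: list = field(default_factory=list)  # Map: what it decides
    failure_modes: list = field(default_factory=list)   # Measure: what can go wrong
    controls: list = field(default_factory=list)        # Manage: what catches it

    def open_risks(self):
        """Failure modes with no documented control mitigating them."""
        covered = {c["mitigates"] for c in self.controls}
        return [f for f in self.failure_modes if f not in covered]

audit = RiskAssessment(
    system_name="hiring-screener",
    owner="People Ops",
    data_sources=["CVs", "assessment scores"],
    decisions_made=["advance or reject at screening stage"],
    failure_modes=["disparate rejection rates", "proxy features for gender"],
    controls=[{"name": "quarterly bias audit",
               "mitigates": "disparate rejection rates"}],
)
# audit.open_risks() -> ['proxy features for gender']
```

Even a toy structure like this makes the gaps visible: every failure mode without a control is an open risk someone has to own.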

Why This, Why Now

Every major enterprise is deploying AI. Most of them don't have anyone asking the hard questions about how those systems fail, who they affect, and what the regulatory exposure looks like.

I've spent 15 years building the systems. Now I'm learning to write the rules for how they should behave. Not because engineering wasn't enough — but because the systems I build are making decisions that engineering alone can't govern.

The engineers who understand both the technology and the governance layer are rare. That's the gap. And it's where I'm heading.


This is the first in a series on AI ethics policy. Next: four real cases where AI hiring tools discriminated against candidates — and what a proper audit would have caught.

References