Regulatory: AI autonomy rules tightening

Governments worldwide are developing AI regulation. Rules restricting autonomous agent actions could limit what our products can do.

The Risk

Our products give AI agents real capabilities:

  • SmartBoxes lets agents execute code, deploy websites, and access external APIs
  • Murphy enables agents to monitor and report on project status automatically
  • P4gent allows agents to draft and suggest communications on users’ behalf

Regulatory frameworks like the EU AI Act, potential US federal AI legislation, and sector-specific rules (financial services, healthcare) could impose requirements that limit these capabilities or make them prohibitively expensive to operate.

Specific Threats

  1. Disclosure requirements: Mandatory labeling of AI-generated content
  2. Human-in-the-loop mandates: Regulations requiring human approval before certain AI actions
  3. Sector bans: Prohibition of autonomous AI in specific industries (finance, healthcare, legal)
  4. Liability frameworks: New liability rules that make operating autonomous agents uninsurable
  5. Data residency: Requirements that AI processing happen within specific jurisdictions

Mitigations

Product Design

  • Human-in-the-loop by default: P4gent already requires human approval before sending communications; SmartBoxes requires explicit risk acceptance before agents act.
  • Audit trails: Nomos Cloud provides complete decision traces that satisfy explainability requirements.
  • Configurable autonomy: Products allow organisations to dial down autonomy to meet their compliance needs (see the sketch after this list).
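To make these three mitigations concrete, here is a minimal sketch of how an approval gate, a decision trace, and a configurable autonomy dial might compose. All names here (AutonomyLevel, AuditRecord, AgentPolicy, execute) are hypothetical illustrations for this document, not the actual product APIs:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class AutonomyLevel(Enum):
    """Hypothetical autonomy dial an organisation could set per product."""
    SUGGEST_ONLY = 1   # agent drafts, a human executes
    APPROVE_EACH = 2   # every agent action needs human sign-off
    APPROVE_RISKY = 3  # only actions flagged as risky need sign-off
    FULL = 4           # agent acts without gating


@dataclass
class AuditRecord:
    """One entry in the decision trace: what was proposed, why, and the outcome."""
    timestamp: str
    action: str
    rationale: str
    autonomy_level: str
    approved_by: str | None  # None means no human was in the loop
    allowed: bool


@dataclass
class AgentPolicy:
    level: AutonomyLevel
    trail: list[AuditRecord] = field(default_factory=list)

    def execute(self, action: str, rationale: str, risky: bool,
                approver: str | None = None) -> bool:
        """Run an action if the policy allows it, recording it either way."""
        needs_human = (
            self.level in (AutonomyLevel.SUGGEST_ONLY, AutonomyLevel.APPROVE_EACH)
            or (self.level is AutonomyLevel.APPROVE_RISKY and risky)
        )
        allowed = not needs_human or approver is not None
        # Blocked actions are logged too: the trace covers every decision.
        self.trail.append(AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=action,
            rationale=rationale,
            autonomy_level=self.level.name,
            approved_by=approver,
            allowed=allowed,
        ))
        return allowed


# Example: an organisation dials autonomy down to "approve risky actions only".
policy = AgentPolicy(level=AutonomyLevel.APPROVE_RISKY)
policy.execute("deploy website", "user requested publish", risky=True)   # blocked
policy.execute("deploy website", "user requested publish", risky=True,
               approver="ops@example.com")                                # allowed
```

The design point is that every decision, blocked or allowed, leaves a record; a complete trace of that kind is the property explainability and audit requirements are most likely to test.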

Business

  • Regulatory monitoring: Track AI legislation across target markets (UK, EU, US)
  • Compliance partnerships: Work with legal and compliance consultants who specialise in AI
  • Industry engagement: Participate in standards bodies and comment on proposed regulations
  • Market flexibility: B2B focus means we can adapt to enterprise compliance requirements

Residual Risk

Regulation is inherently unpredictable. A strict interpretation of “autonomous AI” could require significant product changes. Our best protection is building products that already exceed likely requirements—full auditability, human oversight, and configurable autonomy levels.

Probability: High (AI regulation is coming; severity is uncertain)

Impact: Medium-High (could require product changes, but unlikely to be existential)

Mitigation effectiveness: Good (we’re building compliance-first)
