Arguments with Algorithms

How to Make Sure AI Isn't Running Wild in Your Organization

A lawyer's no-BS guide to creating AI policies that actually protect your company

Smita Rajmohan
Aug 28, 2025

Let me be blunt: Your employees have probably already fed your confidential data to ChatGPT. Your developers have, at some point, let AI write code that went straight into production.

You can keep pretending this isn't happening, or you can get ahead of it. As someone who's spent the last five years helping companies navigate AI legal landmines, I can tell you which approach costs less in the long run.

Here's how to build an AI policy that actually works and can be operationalized. Because no one, and I do mean NO ONE, reads the employee handbook.


The Three Big Legal Risks You're Already Facing

Before we talk solutions, let's talk about what keeps me up at night on behalf of my clients:

1. Data Breach Liability: Every time someone pastes customer data into a free AI tool, you're potentially violating privacy laws and contractual obligations, and triggering breach disclosure requirements. (One technical guardrail for this is sketched below.)

2. IP Contamination: AI tools trained on copyrighted material can reproduce protected content. Your company could be liable for infringement, even if the AI generated it.

3. Regulatory Compliance Failures: Industries like healthcare, finance, and legal have specific compliance requirements that many AI tools can't meet. One HIPAA violation from an AI tool could cost you millions.

The solution isn't to ban AI. That's a terrible idea! It's to use it responsibly.
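One way to make "don't paste customer data" enforceable rather than aspirational is a technical pre-filter that redacts obvious PII before a prompt ever reaches an external AI tool. The sketch below is a minimal illustration in Python; the regex patterns and their coverage are assumptions for demonstration, not a substitute for real data-loss-prevention tooling.

```python
import re

# Illustrative patterns only -- a real deployment needs far broader
# coverage (names, account numbers, health data, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

# The redacted version is what actually gets sent to the external AI tool.
prompt = "Customer Jane Doe (jane@example.com, 555-867-5309) has a billing issue."
print(redact(prompt))
# Customer Jane Doe ([REDACTED EMAIL], [REDACTED PHONE]) has a billing issue.
```

In practice you'd route AI traffic through a proxy that applies this kind of filter automatically, but even a toy version like this makes the policy concrete enough to audit.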

The 4-Step Legal Framework for AI Policies

This post is for paid subscribers
