AI Regulation: It's Just Good Manners
Why the most important AI laws boil down to principles your grandma already taught you
As lawmakers around the world scramble to regulate artificial intelligence, the debates can feel overwhelmingly complex. Technical jargon, liability frameworks, codes of practice, blah blah. Some warn about stifling innovation while others demand protection from algorithmic harm.
But step back from the noise and look for the signal, and something remarkable emerges. The core principles driving AI regulation aren't earth-shattering concepts. They're the same basic rules of decent behavior that govern any healthy relationship in a civilized society.
Here's what these laws actually tell us to do, and not do, with AI.
Don't Lie
AI systems shouldn't provide false information or mislead users about their capabilities.
This means chatbots can't claim to be human when they're not. AI-generated content should be labeled as such. Medical AI can't promise miracle cures. Financial AI can't guarantee impossible returns.
The EU's AI Act, California's emerging regulations, and similar frameworks all center on this fundamental principle. If a human providing the same service would be required to be truthful, the AI should be too.
Dating apps using AI to generate fake profiles would violate this principle. So would AI customer service that pretends to be human without disclosure.
Tell It Like It Is
Companies must be clear about what their AI systems actually do and how they make decisions.
This is the "show your work" requirement. If an AI system is making decisions that affect people's lives, like job screening or medical diagnoses, people have a right to understand the basic logic.
This doesn't mean companies have to reveal trade secrets or proprietary algorithms. It means explaining, in plain language, what factors the system considers and how it reaches conclusions.
A hiring AI should explain that it considers education, experience, and skills assessments, and that it doesn't penalize candidates based on zip codes or names. A credit AI should clarify that it looks at payment history and income, not undisclosed signals like social media activity. You get the gist.
Don't Deceive
AI systems shouldn't be designed to exploit psychological vulnerabilities or trick people into actions against their interests.
This targets the dark patterns that manipulate human psychology: social media algorithms designed to maximize addiction, political AI that micro-targets disinformation to vulnerable populations, or lifelike digital-persona chatbots aimed at children that are built to manipulate or deceive.
The principle recognizes that AI's power to analyze and predict human behavior creates new opportunities for manipulation that go far beyond traditional advertising. Recommendation algorithms that deliberately promote extreme content to increase engagement, knowing it causes psychological harm, would fall into this bucket.
Tell Us What You're Doing With Our Data
Clear disclosure about data collection, use, and sharing, not in legal or technical gibberish, but in language people can actually understand.
This goes beyond having a privacy policy. Companies must explain, in plain terms, what personal information they're collecting, how AI systems use it, and who else gets access.
This includes being honest about data retention, algorithmic training, and the surprising ways AI can infer sensitive information from seemingly innocent data. AI manufacturers must be upfront about which conversations are stored, analyzed, or used to improve AI models.
Tell Us Why You're Doing It
Companies must have legitimate reasons for collecting and using personal data, and they should stick to using that data for those purposes only.
This prevents the scope creep where data collected for one purpose gets repurposed for something entirely different. If you collect location data to provide navigation, you can't secretly use it to infer political affiliations for advertising. Obvio, right?
AI amplifies this concern because machine learning can find unexpected patterns and correlations in data, creating temptation to use information in ways users never anticipated. An education app that collects learning data to personalize lessons can't suddenly start using that same data to predict and sell information about students' career prospects to employers.
You're On the Hook If It Breaks
Companies remain responsible for the outcomes of their AI systems, even when the technology makes autonomous decisions.
You can't say "the algorithm made me do it" when your AI system causes harm. Nor can you start pointing fingers at others in the AI supply chain if you mess up. I mean, you can, but it may not be the best argument. If your AI hiring tool discriminates, if your AI medical device gives dangerous advice, if your AI trading system crashes markets, you're going to be responsible, though you probably have recourse against the providers of the foundation models you built on. Maybe.
This principle requires companies to actively monitor their AI systems, test for unintended consequences, and have humans who can step in when things go wrong. That last part is often called “human in the loop.”
If an AI-powered autonomous vehicle (e.g., a self-driving car) causes an accident due to poor training data or inadequate testing, the manufacturer bears responsibility.
Don't Steal
AI systems shouldn't violate copyright, trademark, or other intellectual property rights in their training or operation.
This is the hottest current battleground in AI regulation. It means AI companies need to comply with the law before scraping copyrighted content from the internet to train their models. Artists, writers, musicians, and other creators have rights in their intellectual property, and courts and lawmakers aren't quite sure yet how existing IP law should apply to AI and its training content. I won't bore you by explaining legal principles like 'fair use,' but trust me, there are a lot of great minds working on it.
The obvious issue with 'stealing IP' arises when AI systems reproduce copyrighted works too closely in their outputs. That's usually a bug, not a feature, but it has fueled some big lawsuits, like The New York Times suing OpenAI.
Ask for Permission
Meaningful, transparent, and informed consent for AI training and deployment, especially for sensitive applications.
This means more than clicking "I agree" on a terms-of-service document nobody reads (please, please read it!). For high-impact AI applications, people should understand what they're agreeing to and have a real choice about participation.
This is especially important for AI systems trained on personal data or deployed in sensitive contexts like healthcare, education, or employment.
Using patient medical records to train AI diagnostic tools requires clear, informed consent, not just a buried clause in hospital admission forms.
Give Options
People should have meaningful alternatives to AI-driven decisions and some control over how AI systems affect them.
This means you can't force people into AI-only interactions for essential services. There should be human alternatives for important decisions, and people should be able to opt out of AI processing when feasible. Some of this isn't mandated by law yet, but companies have instituted it anyway because it's what customers have come to expect.
It also means providing tools for people to understand and, when appropriate, correct AI decisions about them.
Why This Approach Works
These principles work because AI is ultimately about relationships. Between companies and customers, between governments and citizens, between technology and humanity.
The best AI regulation won't come from trying to predict every possible technological development or micromanaging technical specifications. It will come from clearly articulating the human values we want AI systems to respect, then holding companies accountable for upholding those values regardless of how their technology evolves.
Your grandmother (and mine) was right. Honesty, respect, and taking responsibility for your actions never go out of style, even in the age of artificial intelligence.
Hey! If you enjoyed reading this, please consider becoming a paid member (it's $5 for the next 50 subscribers, forever). Thanks!!!