Mako Logics

Resources / AI Acceptable Use Policy Template

AI Acceptable Use Policy Template

The written policy your people will actually follow and your auditor can point to. Covers ChatGPT, Microsoft Copilot, Claude, Gemini, and the specialized AI tools showing up across departments. Adapt to your compliance posture.

AI policy service →

Published April 21, 2026.

This template covers the majority of non-regulated Houston businesses. Healthcare, legal, financial, and DoD-adjacent companies need industry-specific clauses beyond this.

1. Purpose

This policy defines how [Company Name] employees, contractors, and consultants use artificial-intelligence tools in the course of their work, with the goal of enabling productivity while protecting client-confidential information, employee information, intellectual property, and regulatory compliance obligations.

2. Scope

This policy applies to all AI tools and services used for company business, whether accessed from company-owned or personal devices. This includes (but is not limited to): ChatGPT / OpenAI, Microsoft Copilot, Anthropic Claude, Google Gemini, Perplexity, image / video generation tools, coding assistants, and industry-specific AI.

3. Approved and restricted tools

3.1 Approved

  • [List approved tools by name — e.g., Microsoft 365 Copilot, ChatGPT Team / Enterprise, Claude for Work]
  • Approved tools are licensed at the business tier and have a Data Processing Agreement or equivalent on file

3.2 Restricted

  • Free / consumer tiers of any AI tool for work involving client or regulated data
  • Unapproved browser extensions that read page content and send it to third-party AI services
  • AI tools without a business-tier data-handling agreement

4. Data classification and handling

4.1 Do not submit to any AI (approved or not)

  • Client-confidential information covered by NDAs
  • Personally identifiable information (PII) of others (SSN, driver's license number, date of birth)
  • Protected health information (PHI) — healthcare clients only
  • Payment card / financial account data
  • Controlled Unclassified Information (CUI) — DoD-adjacent clients only
  • Passwords, API keys, credentials, or secrets
  • Information under legal hold or active litigation

4.2 Approved with redaction

  • De-identified client examples (names removed, details abstracted)
  • Internal documents where proprietary details have been scrubbed
  • Code without embedded credentials or sensitive identifiers
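The redaction rules above can be partially automated before text leaves your environment. Below is a minimal sketch of a pattern-based redaction pass; the patterns are illustrative assumptions, not an exhaustive rule set, and a vetted DLP or redaction tool should be the long-term answer for regulated data.

```python
import re

# Illustrative patterns only — a production deployment needs a far
# broader, vetted rule set (names, addresses, account numbers, etc.).
PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags before the
    text is pasted into any AI tool."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(tag, text)
    return text
```

A pass like this catches obvious structured identifiers, but it cannot de-identify free-text details (project names, unique circumstances), so human review of anything derived from client material is still required.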

4.3 Always approved

  • Publicly available information (research, marketing copy, public code)
  • Internal drafts of public-facing content
  • Generic productivity tasks (summarizing a meeting transcript that contains no sensitive data)

5. Work-product review

AI-generated output is a draft, not a finished product. All AI-assisted work must be reviewed by the responsible employee before it is shared with clients, used for decision-making, or published externally. Specific reviews required:

  • Legal, financial, or medical content — reviewed by the responsible licensed professional
  • Code — reviewed, tested, and scanned for embedded credentials before merge
  • Client-facing writing — edited for accuracy and voice
  • Decisions affecting hiring, compensation, or benefits — never made by AI alone
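The credential-scan step for code can be approximated with a simple pre-merge check. The sketch below uses a few assumed example patterns (an AWS access key ID prefix, PEM private-key headers, hard-coded key/password assignments); a dedicated secret scanner wired into CI is the more robust choice.

```python
import re

# Illustrative secret patterns — real scanners ship much larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(source: str) -> list[str]:
    """Return matched snippets so a reviewer can inspect them before merge."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(source))
    return hits
```

A non-empty result should block the merge until the finding is reviewed; an empty result is not proof of safety, only the absence of known patterns.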

6. Attribution and intellectual property

  • Employees may not present AI-generated content as original work, without disclosure, when the client or employer is relying on it as such
  • Where regulation requires disclosure (legal briefs in some jurisdictions, academic work), employees must disclose AI use
  • Company retains ownership of work product created with approved AI tools in the course of employment

7. Reporting and incidents

  • Employees must report suspected data leaks to AI systems within 24 hours to [IT / Security Coordinator]
  • Accidental exposure of regulated data through an unapproved AI tool is treated as a security incident under the company's Incident Response Plan
  • No retaliation for good-faith reporting

8. Training and attestation

  • All employees complete AI-use training during onboarding
  • Annual refresher training for all staff
  • Annual written attestation that the employee has reviewed and agrees to this policy

9. Review cycle

This policy is reviewed at least annually by the [Information Security Coordinator / AI Governance Lead] and when material changes occur in approved tools or regulatory obligations.

10. Sign-off

Adopted: [Date].
Approved by: [Name, Title].
Review cycle: Annual.