The use of generative AI in the workplace is growing at an astonishing rate. Two recent BBC articles highlight the challenges employers face in managing this shift. One article revealed that employees are secretly using AI tools without company approval because they find the sanctioned options inadequate or the approval process too slow. The other reported that international law firm Hill Dickinson restricted AI use after discovering that staff had used ChatGPT 32,000 times in a single week!
Both stories point to a common theme: AI is already embedded in the way employees work, whether employers like it or not. If organisations take a rigid, restrictive approach, they risk stifling innovation and efficiency. Instead of banning AI or insisting on a single ‘approved’ tool, businesses should adopt a more open-minded and collaborative strategy.
Listen to employees – AI isn’t a one-size-fits-all solution
Many employees turn to external AI tools because the available workplace solutions don’t meet their needs. A strict ‘use only this tool’ approach often fails to acknowledge that different teams may require different functionalities. A marketing team might need AI for content generation, while legal teams might use it for document review and summarising.
Rather than imposing AI solutions from the top down, employers should engage employees through workshops and feedback sessions to understand how AI is being used in different roles. This will help create policies that are practical and useful. Ask your employees which tools they would recommend, then research those suggestions before deciding what to approve.
10 key elements to include in a Generative AI Policy
A clear Generative AI Policy is crucial. Employers need to make sure AI is used responsibly and effectively, because leaving AI use unchecked carries obvious risks: AI's tendency to hallucinate, data protection breaches, and breaches of confidentiality.
Here are ten key points that we think should be included in any policy:
- Permitted use cases – Define acceptable uses of generative AI, such as summarising documents, drafting emails, or brainstorming ideas, while clarifying prohibited uses (e.g., generating legal advice).
- Transparency requirements – Employees should disclose when AI-generated content is used in work-related materials, particularly for client-facing communications.
- Confidentiality and data protection – Strict rules should prevent employees from inputting sensitive, proprietary, or personal data into AI tools.
- Process for approving use of different AI tools – Instead of restricting AI use altogether, provide a structured but efficient process for employees to request approval for new AI tools.
- Accuracy and human oversight – AI can hallucinate, so the policy should require human oversight to maintain control. Employees should be required to verify AI-generated outputs for factual accuracy before relying on them.
- Bias and ethical considerations – The policy should explain that AI models can produce biased or inappropriate content. Employees need to be aware of this and review any content with a critical eye.
- Intellectual property and copyright issues – Clarify whether AI-generated work can be considered original and whether there are any IP concerns related to its use in business materials.
- Monitoring and review – The policy should explain how AI use is monitored in the workplace (with a cross-reference to any monitoring communications policy). More generally, the policy should include an explanation of how frequently the policy will be reviewed.
- ESG and AI – The data centres which power AI produce large amounts of electronic waste and require significant quantities of water for cooling, both of which carry environmental costs. Employers need to align AI use with any wider ESG policy. Measures worth including are: a requirement to limit AI use to situations where it is clearly needed; offering options other than large language models, which consume large amounts of energy; and requiring employees to disable embedded desktop AI (such as Microsoft Copilot) when the tasks they are doing do not require its assistance.
- Accountability and consequences – Define who is responsible for AI-generated content. Outline potential consequences for misuse, with reference to your disciplinary policy if relevant.
Key takeaways
As with most technological advancements of the modern age, resistance is futile. AI adoption is inevitable. More than this, there are clear business advantages to (appropriately controlled) AI usage at work.
Resistance or over-restriction will only drive employees to find workarounds. Instead of banning or limiting AI use out of fear, organisations should take a proactive, informed, and employee-driven approach. From HR’s perspective, this starts with the creation and promotion of a clear Generative AI Usage Policy.
If you are an HR Inner Circle member, you can access a comprehensive template Generative AI Usage Policy in the templates and checklists section of the Vault: https://members.hrinnercircle.co.uk/the-vault/templates-and-checklists