The Single Source Regulations Office (SSRO) is committed to using AI responsibly to support our work. We recognise the opportunities AI offers to improve efficiency and quality, while ensuring safety, ethics, and human oversight remain paramount. This statement explains how we use AI, the safeguards we have in place, and our commitment to transparency.
Why we use AI
As part of our Digital Strategy, we are beginning to use Microsoft 365 Copilot – an AI-powered assistant integrated into Office applications such as Word, Excel, Outlook and Teams. Copilot can help staff with tasks such as summarising documents, drafting text, generating tables or charts, and performing preliminary analysis based on information they already have permission to access.
Our aim is to explore whether AI tools can save time and enhance the quality of our work without compromising compliance or accuracy.
Our principles
- Defined scope: AI is used only for internal tasks such as drafting documents, summarising reports and streamlining routine work. It is not used to perform regulatory decision‑making, to issue official guidance, or to make recommendations on contract pricing. Any content used externally or for formal decisions is authored and approved by SSRO staff.
- Security and privacy: Copilot operates within our secure Microsoft 365 cloud environment and respects existing permissions: it cannot change access rights, connect to external data sources, or retrieve information a user does not already have permission to view. We only use AI on information classified at the ‘OFFICIAL’ level, which covers the majority of our day-to-day work. We do not use AI on more highly classified material or on special category personal data.
- Compliance: Our approach aligns with MOD guidance (JSP 936: AI in Defence[1]) and UK government standards. We have completed a Technical Readiness Review (TRR) and updated our policies to govern AI use.
- Ethics: When using AI tools we follow the MOD’s AI ethical principles: fairness, accountability, transparency and reliability. We check outputs for bias or inappropriate framing, ensure humans remain accountable for decisions and published outputs, are open about where AI has supported our work, and keep AI use constrained to safe, well-defined internal productivity tasks. We will keep these principles and controls under review as guidance and best practice evolve.
- Human oversight: AI will only ever be used in conjunction with meaningful human involvement. All outputs generated by Copilot are reviewed and approved by SSRO staff before use. For example, if Copilot is used to help draft a response to an enquiry, a staff member reads and edits it for accuracy and tone before sending.
- Transparency: We publish this statement to explain how AI is used by the SSRO and to reassure stakeholders that the exercise of the SSRO’s statutory functions always remains subject to human control.
Managing risks
We have assessed ethical and privacy risks and put in place clear controls:
- Data handling: All data processed by Copilot stays within our secure Microsoft 365 tenancy. No SSRO data is sent to external servers or used to train Microsoft’s AI models.
- Privacy compliance: Copilot does not introduce new data collection. It acts on information we already securely hold and falls under our existing Privacy Notice. Individuals’ rights under UK GDPR are unaffected. The SSRO does not take decisions based solely on automated processing; all decisions with legal or similarly significant effects involve meaningful human involvement.
- Usage guidelines: Staff must not input special category personal data or highly classified material into Copilot. Staff must only use Copilot with information they are permitted to access.
- Inaccurate or misleading outputs: We recognise that responsible AI use starts with the quality of the information and instructions provided to the tool. SSRO staff are expected to use clear, well-scoped prompts, rely on accurate and up-to-date source material, and apply appropriate checks to ensure the prompt and supporting context are suitable. We are supporting this through training and practical guidance on using Copilot effectively, and staff remain responsible for ensuring that both inputs and outputs meet SSRO standards before any material is used.
- Classification: Classification remains entirely a human responsibility. AI does not assign sensitivity labels or determine classification; SSRO staff must apply sensitivity labels and ensure documents are correctly classified according to their content before sharing or publishing.
- Accuracy and bias: Staff review all outputs for factual accuracy and tone. We acknowledge that AI can sometimes be incorrect or biased, so human oversight is essential.
- Copyright and legal duties: AI does not change our obligations under intellectual property law or public sector information regulations. Staff remain responsible for compliance.
Governance and oversight
We have established a governance framework to ensure responsible AI use:
- A dedicated working group monitors implementation and reviews risks.
- Our Responsible AI Senior Officer (RAISO) provides expert guidance on ethics and best practice.
- Oversight mechanisms ensure continuous monitoring and accountability, consistent with JSP 936 and wider government standards.
- We – and in particular our RAISO – will keep our use of AI, and this transparency statement, under regular review. If our tooling changes, or our use of Microsoft 365 Copilot changes materially, we will update this statement accordingly. Our RAISO will ensure alignment with relevant government guidance, such as the Algorithmic Transparency Recording Standard.
Ongoing transparency and engagement
The SSRO will continue to be transparent about its use of AI. This public statement is part of that commitment. We will keep it up to date as our approach evolves.
We also welcome discussion on this topic. Stakeholders with questions or concerns about our use of AI are encouraged to contact us via helpdesk@ssro.gov.uk.
[1] Part 2 of JSP 936 (the ethical risk toolkit) is not publicly available, so we reference it without a link.