
SEGAS-00020 Use AI

Last updated: 20 March 2026
Relates to (tags): Generative AI, Artificial intelligence (AI), Secure development, Ways of working, Developed with assistance from Copilot

Teams should use AI where it improves developer productivity, code quality, accessibility, or service outcomes. AI can be used to support good engineering practices, such as writing documentation, increasing test coverage, refactoring legacy code, and addressing defects.

The Home Office has departmental policies and guidance on the use of AI and AI tooling. They set out the key responsibilities and guardrails for people working at the Home Office to follow.
This standard sets expectations to help engineers at the Home Office use AI coding assistants in safe, responsible and consistent ways. It makes sure that AI‑assisted work is done with human oversight and meets existing engineering, security and quality standards.

References to “teams” in this standard mean human teams responsible for the design, development, and operation of software; AI systems are treated as tools and are not considered to be team members.



Requirement(s)

AI‑assisted outputs MUST be reviewed and approved by a human before reaching production

Teams must ensure that any AI-assisted code or outputs are reviewed and approved by suitably qualified and experienced people before the code reaches production, usually through code review and testing.

Teams retain full accountability for all AI‑assisted code and outputs. AI tools cannot replace human judgement, understanding, ownership, or responsibility for decisions, designs, or changes made to systems.

Teams need to be confident that they understand what they are running and can assert its security and maintainability.

AI‑assisted changes MUST be traceable through engineering processes

AI‑assisted changes must be marked as such, made visible, and be auditable through standard engineering practices (e.g. commits, pull requests, reviews). Teams should be able to demonstrate how AI‑generated outputs were reviewed, validated, and approved through existing processes.

For example, in a commit message, use a marker such as:

Add input validation for email field [AI-assisted]

and in the pull request description, state:

Parts of this change were generated with the assistance of an AI coding tool and have been reviewed and amended by the author.
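Teams could surface such markers automatically. The following is a minimal sketch, not a prescribed mechanism, of a git commit-msg hook in Python; it assumes the `[AI-assisted]` marker format from the example above, which individual teams may vary.

```python
import re
import sys

# Marker convention taken from the example in this standard; teams may use
# a different format, in which case this pattern should be adjusted.
AI_MARKER = re.compile(r"\[AI-assisted\]", re.IGNORECASE)

def is_ai_assisted(commit_message: str) -> bool:
    """Return True if the commit message declares AI assistance."""
    return bool(AI_MARKER.search(commit_message))

def main(argv: list[str]) -> int:
    """commit-msg hook entry point: git passes the path of the commit
    message file as argv[1]. Informational only - never blocks the commit."""
    if len(argv) > 1:
        with open(argv[1], encoding="utf-8") as f:
            if is_ai_assisted(f.read()):
                print("Note: commit is marked [AI-assisted]; "
                      "ensure human review before merge.")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

Installed as `.git/hooks/commit-msg`, this would print a reminder whenever a marked commit is made, keeping AI-assisted changes visible without adding friction.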

You MUST test your code

All AI‑assisted changes must be tested in line with existing engineering standards such as the Developer Testing standard and the QAT principles before being merged or deployed. This ensures that AI‑generated code behaves as expected, does not introduce defects or regressions, and continues to meet quality, security and reliability expectations.
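As an illustration only, the email-validation change from the commit example above might be covered by tests like these; `is_valid_email` and its regex are hypothetical stand-ins for an AI-assisted change under test, not a recommended validator.

```python
import re

# Hypothetical validator standing in for an AI-assisted change under test.
_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(value: str) -> bool:
    """Return True for a plausibly well-formed email address."""
    return bool(_EMAIL_RE.fullmatch(value))

def test_accepts_well_formed_address():
    assert is_valid_email("user@example.com")

def test_rejects_missing_domain():
    assert not is_valid_email("user@")

def test_rejects_embedded_whitespace():
    assert not is_valid_email("user name@example.com")

if __name__ == "__main__":
    test_accepts_well_formed_address()
    test_rejects_missing_domain()
    test_rejects_embedded_whitespace()
    print("all checks passed")
```

The point is that AI-generated code gets the same coverage of expected behaviour, failure cases, and edge cases as any other change.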

You MUST follow the guardrails

You must follow all approved organisational and technical guardrails when using AI tools. Guardrails exist to ensure AI is used safely, securely and responsibly, and to prevent outcomes such as data leakage, insecure code, biased or inappropriate outputs, or erosion of established engineering standards.

AI‑assisted work must comply with Home Office security policies, engineering standards, legal and ethical obligations, and any service or programme‑specific controls that apply. Where guardrails restrict or prohibit certain uses of AI, teams must not bypass or weaken those controls, or attempt to do so.

You MUST plan for the worst

You must plan for worst‑case outcomes when using AI tools. This includes assuming that AI‑generated outputs may be incorrect, insecure, biased, misleading, or inappropriate, and that failures may not be obvious at the point they are introduced.

Designs and delivery approaches should account for the impact of AI‑related failures, including how issues will be detected, mitigated, and recovered from. AI use should not create single points of failure, remove meaningful human oversight, or reduce the ability to respond effectively to defects, security incidents, or misuse.

You MUST ensure that sensitive, personal, classified, or otherwise restricted data is not exposed to AI tools unless explicitly approved

You must ensure that AI tools are used in accordance with Home Office data handling, security, and information assurance policies. Sensitive or restricted information must not be shared with AI tools unless explicit approval has been granted through appropriate governance and assurance processes.

AI‑assisted code MUST meet the same security expectations as human‑written code

All AI‑assisted code must be subject to the same security standards, reviews, and controls as code written without AI assistance. The use of AI should not reduce the level of security scrutiny applied to any change.

You MUST understand and manage the risks of AI‑introduced dependencies, libraries, or code patterns

You must assess and manage risks introduced by AI‑suggested dependencies, libraries, or implementation patterns in line with the Managing the security of software dependencies standard. This includes considering security vulnerabilities, licensing, maintainability, and alignment with existing architectural and engineering standards.
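One lightweight control, sketched here under the assumption that a team maintains an internal allow-list of assessed packages (the list contents and requirements lines below are illustrative), is to flag AI-suggested dependencies that have not yet been through that assessment. A real implementation would sit alongside vulnerability and licence scanning, not replace it.

```python
# Sketch: flag dependencies that are not on a team-approved allow-list.
# APPROVED and the sample requirements are illustrative assumptions only.
APPROVED = {"requests", "flask", "pydantic"}

def parse_requirement(line: str) -> str:
    """Extract the bare package name from a requirements-style line."""
    name = line.split(";")[0].strip()
    for sep in ("==", ">=", "<=", "~=", ">", "<"):
        name = name.split(sep)[0]
    return name.strip().lower()

def unapproved(requirements: list[str]) -> list[str]:
    """Return dependencies that need assessment before use."""
    return [
        parse_requirement(line)
        for line in requirements
        if line.strip()
        and not line.lstrip().startswith("#")
        and parse_requirement(line) not in APPROVED
    ]

if __name__ == "__main__":
    suggested = ["requests==2.32.0", "left-pad-py>=1.0", "# a comment"]
    print(unapproved(suggested))  # flags only the unvetted package
```

Flagged packages would then go through the team's normal assessment of vulnerabilities, licensing, maintainability, and architectural fit before being adopted.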

AI usage MUST evolve in line with updated guidance, standards, and organisational guardrails

Teams must regularly review and adapt their use of AI as guidance, standards, risks, and organisational guardrails change. AI usage should not remain static where expectations or controls have been updated.

You MUST only use approved tools

Teams must only use AI tools that are explicitly approved by the organisation on the Technology Register and configured in line with published guidance. Unapproved AI tools must not be used for software development activities.

