Responsible AI Statement

Our commitment to responsible AI

Our approach to AI is shaped by our purpose: putting people at the centre of care, making life easier for those who deliver it and improving outcomes for everyone involved. We recognise that responsible AI in social care must be grounded in the human rights of the people it affects - their dignity, autonomy, privacy and right to participate in decisions about their own care. This human rights foundation, alongside our commitment to care values, guides the choices we make about AI: we prioritise capabilities that reduce burden, support better-informed decisions and strengthen the relationship between carers and the people they support. We avoid AI that adds complexity, distracts from care or replaces professional judgement.

Responsible AI principles

Below are the principles that guide how we develop and use AI:

1. Accountability and governance

To make sure our AI systems are safe, well-managed and used appropriately, at PCS we’re committed to clear ownership, defined responsibilities and strong governance structures. We embed AI within our existing governance framework rather than treating it as a separate discipline. Our information security management system, certified to ISO 27001, guides how we identify, assess and treat AI-related risks. Decisions are documented and overseen by named owners, and we maintain human oversight wherever it matters. We prepare for emerging standards such as ISO 42001, which provides an internationally recognised, best-practice framework to manage and evidence responsible AI.

2. Transparency

Where AI is used in any system or software, people must be made aware that AI is being used, what it does, how it contributes to decisions and how it fits into real workflows. At PCS, we explain how our AI features work, what role they play and where they appear in workflows. When AI contributes to insights or predictions, we make this visible so that teams understand its purpose and limitations. We aim for clarity over complexity and we will not overstate what AI can do.

3. Privacy and data protection

Data collection and processing are the foundation of our services, so security and privacy are essential. We recognise and uphold the key principles of general data protection law in all aspects of our business, from data collection and processing to storing and safeguarding.
All data processing involving AI is assumed to be high risk, so before we begin any processing with AI, we complete a data protection impact assessment. Our security controls minimise data exposure and ensure that personal information is handled responsibly throughout the lifecycle of AI features.

4. Fairness and reliability

We use a risk-based evaluation process before AI touches our systems or customer data. This includes reviewing dataset quality, assessing potential impacts and monitoring performance over time. We also assess where bias or discrimination could occur and take steps to minimise this through data review, model testing and ongoing monitoring. To reduce the risk of bias, we will also conduct segmented validation, assessing model performance across multiple cohorts to identify and mitigate disparities.
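
As a simplified illustration of what segmented validation can look like in practice, the sketch below compares a model's recall across age-band cohorts and flags any cohort that falls noticeably below the overall figure. The cohort labels, metric and tolerance are illustrative assumptions rather than fixed policy values, and the sketch uses scikit-learn rather than our production tooling.

    # Illustrative segmented validation: compare recall per cohort against the
    # overall figure and flag cohorts that fall more than a tolerance below it.
    # Cohort labels, the metric and the tolerance are illustrative assumptions.
    from sklearn.metrics import recall_score

    def segmented_validation(y_true, y_pred, cohorts, tolerance=0.05):
        """Return per-cohort recall and the cohorts needing bias review."""
        overall = recall_score(y_true, y_pred)
        per_cohort, flagged = {}, []
        for cohort in sorted(set(cohorts)):
            idx = [i for i, c in enumerate(cohorts) if c == cohort]
            score = recall_score([y_true[i] for i in idx], [y_pred[i] for i in idx])
            per_cohort[cohort] = score
            if score < overall - tolerance:
                flagged.append(cohort)   # disparity: investigate and mitigate
        return per_cohort, flagged

    # Example: labels, predictions and an age-band cohort per record
    labels      = [1, 0, 1, 1, 0, 1, 0, 1]
    predictions = [1, 0, 0, 1, 0, 1, 0, 0]
    age_bands   = ["65-74", "65-74", "75-84", "75-84", "85+", "85+", "65-74", "85+"]
    print(segmented_validation(labels, predictions, age_bands))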

As PCS technology serves predominantly older adults, including people living with dementia and cognitive impairment, we pay particular attention to age-related bias in AI models, the risk of language or assumptions that do not respect the dignity and individuality of older people, and the potential for AI outputs to reflect societal ageism inherited from training data.

We also assess whether AI features adequately account for the diversity of the people who draw on care - including cultural, linguistic and cognitive diversity.

Predictive models, such as those related to deterioration or falls risk, are designed to support professional judgement, not replace it, so care teams remain in full control and remain the responsible decision makers. Our aim is to make sure AI behaves consistently, supports equitable outcomes and does not disadvantage any individual or group.

5. Sustainability and continuity

We recognise that care services that depend on AI must also be resilient to technology disruption. We design our AI features so they enhance care workflows without creating dependencies that would compromise care if the technology were unavailable. We support our customers in maintaining the capability to deliver safe, high-quality care with or without AI-assisted tools.

6. Human oversight and control

AI must support people, not replace their expertise. Our AI features are designed to assist care professionals, not make decisions on their behalf. Predictive intelligence, summaries and automated checks reduce burden and provide early insights, but they never remove professional judgement or responsibility.

In line with regulatory guidance, we ensure human oversight is meaningful. This means human reviewers actively check and interpret AI-assisted recommendations rather than applying them automatically. Reviewers weigh each recommendation and can challenge or override an AI output when their judgement indicates a different course of action. Our AI outputs are therefore reviewable, understandable and always secondary to the professional expertise of the care team.
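
As a simplified sketch of what this means in system terms (the field names and example values below are illustrative assumptions, not our data model), an AI recommendation only becomes actionable once a named reviewer records an accept or override decision:

    # Illustrative human-in-the-loop gate: a recommendation is never applied
    # automatically; it takes effect only after a reviewer's recorded decision.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AiRecommendation:
        summary: str                      # what the AI suggests, in plain language
        rationale: str                    # why, so the reviewer can interpret it
        reviewer: Optional[str] = None
        decision: Optional[str] = None    # "accepted" or "overridden"
        reviewer_notes: str = ""

    def record_review(rec: AiRecommendation, reviewer: str,
                      accept: bool, notes: str = "") -> AiRecommendation:
        """The recommendation becomes actionable only after this step."""
        rec.reviewer = reviewer
        rec.decision = "accepted" if accept else "overridden"
        rec.reviewer_notes = notes
        return rec

    rec = AiRecommendation(
        summary="Increase overnight checks for a resident",
        rationale="Recent pattern of night-time movement and a prior fall",
    )
    record_review(rec, reviewer="senior carer", accept=False,
                  notes="Resident reviewed in person; existing plan remains suitable")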

7. Security and responsible use of third-party AI

We apply the same level of scrutiny to third-party AI tools as we do to any critical system. This includes initial and ongoing due diligence, technical assessment, risk analysis and contractual safeguards. No external AI service is allowed to process data unless it meets our standards for security, privacy and responsible operation.

Where we plan to use LLMs as part of agentic and workflow AI, we will make sure an ontology layer is used, grounding the LLM with explicit semantic context. This structured context serves as a safeguard against common LLM risks (such as hallucination) by constraining the agent's reasoning to verifiable, traceable relationships within the data it has access to.
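
As a simplified sketch of this idea (the entities, relationships and prompt wording below are illustrative assumptions, not our implementation), an ontology layer restricts the context an agent reasons over to explicit, traceable relationships, and the prompt instructs the model to answer only from those facts:

    # Illustrative ontology layer: the agent may only reason over relationships
    # that exist as explicit triples, each traceable to a source record.
    # Entities, relations and prompt wording are illustrative assumptions; the
    # actual LLM call is deliberately left out.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Triple:
        subject: str
        relation: str
        obj: str
        source: str   # record the triple was derived from, for traceability

    ONTOLOGY = [
        Triple("Resident:1042", "has_care_plan", "CarePlan:88", "care_plan_record_88"),
        Triple("CarePlan:88", "includes_goal", "Goal:mobility", "care_plan_record_88"),
        Triple("Resident:1042", "has_risk_flag", "Risk:falls", "assessment_2026_03_14"),
    ]

    def grounded_context(entity: str) -> list[str]:
        """Return only verifiable, traceable statements about the entity."""
        return [
            f"{t.subject} {t.relation} {t.obj} (source: {t.source})"
            for t in ONTOLOGY
            if t.subject == entity or t.obj == entity
        ]

    def build_prompt(question: str, entity: str) -> str:
        """Constrain the agent to the explicit semantic context it is given."""
        facts = "\n".join(grounded_context(entity))
        return (
            "Answer using only the facts below. If the facts do not support an "
            "answer, say so.\n\nFacts:\n" + facts + "\n\nQuestion: " + question
        )

    print(build_prompt("What risks are recorded for this resident?", "Resident:1042"))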

8. AI co-production and lived experience

We’re committed to involving people who draw on care and support, care workers and care providers in the design, development, testing and review of our AI features. We recognise that responsible AI can’t be defined by technology companies alone. We actively seek input from frontline users, people with lived experience and sector partners at every stage of the product development lifecycle to make sure our AI reflects the realities and values of the care environment. We will publish how this involvement shapes our AI development and governance decisions.

How we use AI in our products

Predictive and data-driven AI

We plan to use data-driven models to support earlier, more informed decision making. These models assist with:

    • predicting falls with injury and signs of deterioration

    • identifying trends and early warning patterns

These insights help care teams respond proactively, while keeping full control over decisions.

Agentic and workflow AI

Our product roadmap for AI development focuses on AI that performs useful work within real workflows. Examples include:

    • AI-generated care plan summaries that staff can review and edit

    • automated background tasks that reduce repetitive administrative work

    • features designed to free up time so teams can spend more time with the people they support

We prioritise embedded, task-specific AI because it is faster, clearer and more effective in real care environments than generic conversational interfaces.

Use of conversational AI

We use conversational AI only where it provides genuine value, such as Rosie, our AI-powered virtual assistant for customers seeking product support.

We deliberately don’t add generic chat interfaces to our products because they can be unhelpful and add friction for users. Instead, we focus on workflow-based chat interactions where the AI is aware of the context and can support seamless, effective interactions.

What we will not do

    • We will not add AI for novelty or hype

    • We will not replace human judgement

    • We will not implement generic chatbot interfaces that reduce efficiency

    • We will not use unvetted or ungoverned third-party AI services

    • We will not use AI in ways that reduce the autonomy, dignity or agency of the people who draw on care and support

    • We will not deploy AI features that enable surveillance or monitoring without the informed consent and understanding of the individuals affected

Our focus is always on meaningful, workflow embedded AI that is demonstrably useful.

How we maintain Responsible AI in practice

Privacy and security first

We have robust technical and organisational controls in place as part of a layered approach to protecting personal and sensitive data. AI features are designed to minimise data use, reduce unnecessary exposure and safeguard information at all times.

Clear oversight

Ownership and accountability sit with specific roles within our governance structure. PCS has appointed a Data Protection Officer and a Caldicott Guardian. The Data Protection Officer is responsible for overseeing our compliance with data protection law, advising on our obligations, monitoring practices and acting as the main contact for regulators and individuals. The Caldicott Guardian is responsible for protecting the confidentiality of health and care information about individuals, and makes sure this information is used ethically, legally and only when appropriate to support care.
We review and monitor AI-related decisions throughout the lifecycle of each product feature via AI system impact assessments. These consider the effect on individuals, groups of individuals, societies and universal human rights.

Open communication

We explain how AI is used, where it appears and how it informs key tasks. We make sure our customers understand what AI does and does not do, why it is there and how it supports the outcomes they rely on.

Continuous improvement

As in all sectors, AI in care evolves quickly. We monitor regulatory developments, new standards and sector best practice so our approach remains responsible and relevant. We’re committed to delivering AI that supports care teams, strengthens decision making and enhances the quality of life for the people they support.

Last updated: 14 April 2026