AI Policy
How we use AI at Habitus Collective
Introduction
At Habitus Collective, we are committed to being transparent about the tools and methods we use in our work, including artificial intelligence. Used well, AI allows us to work more efficiently and devote more of our time and energy to the thinking, relationships, and judgement that are at the heart of what we do. Our aim is to use AI as a tool that amplifies our capabilities without diminishing the human expertise and care we bring to every client and partner relationship.
AI supports our work. It does not replace our professional judgement, our expertise, or the personalised attention we bring to every piece of work we do. Your trust is essential to everything we do together, and being open about how we use AI is part of honouring that trust.
If you need more information, please contact: info [at] habituscollective.co.uk
-
We currently use Claude from Anthropic as our primary AI tool, on a professional plan. We use it to assist with structuring and editing written documents, organising complex information into clear and accessible formats, and summarising our own working notes and draft materials to support efficiency.
We use Read AI on occasion for recording and transcribing meetings and interviews, always with the explicit consent of participants.
-
All substantive work at Habitus Collective is carried out by our team. This includes designing research and evaluation approaches, developing data collection tools, conducting interviews and focus groups, analysing qualitative and quantitative data, identifying themes and patterns, interpreting findings in context, and drawing conclusions and recommendations. The thematic framing, theoretical grounding, and authorial judgements in everything we produce reflect our expertise, our values, and our direct engagement with the people and organisations we work with.
AI can serve as a useful starting point when we are developing ideas for methodological approaches, interview and focus group guides, project plans, and work structures. We treat everything AI generates in this context as a prompt for our own thinking rather than a finished product. All ideas, plans, and approaches are reviewed, challenged, and substantially edited by our team before they are used or shared. The professional judgement about what is appropriate, ethical, and fit for purpose always remains ours.
Much of our work involves producing written outputs including evaluation reports, research summaries, proposals, frameworks, and practice resources, often in collaboration with multiple contributors including peer researchers, practitioners, and people with lived experience. A significant part of the editorial process in these documents is bringing different writing styles and levels of writing experience into a single, coherent voice, without losing what people actually said or meant. AI assists us in that editorial process, in much the same way a writer might use a copy-editor, helping with flow, consistency, and clarity across what are genuinely multi-author documents.
We make a clear distinction between using AI to organise and summarise our own working notes and draft materials, and the synthesis and interpretation of primary data or research findings. Synthesising and interpreting primary data and research findings is substantive analytical work and always remains the work of our team.
All analytical content, thematic interpretation, and authorial judgements are the work of the named authors. No substantive content is generated by AI.
-
We do not share identifiable or sensitive data with any AI tool. Any data shared with Anthropic is not used to train or improve its models.
Where our work involves primary data from research or evaluation, including interview material, case studies, or participant contributions, AI is only ever involved once data has been appropriately anonymised and the substantive analysis completed, and only for editorial or organisational support.
-
We do not use AI to make or inform judgements about people, communities, or their experiences. Human expertise, relational knowledge, and critical reflection remain central to everything we produce.
AI tools are trained predominantly on mainstream, published, and institutionally validated data. This means they can reflect, reproduce, and sometimes amplify the very inequalities and exclusions that our work seeks to address. They are not equipped to hold the complexity, nuance, and situated knowledge that comes from lived experience, and they cannot replicate the relational and participatory processes through which that knowledge is generated and validated.
Our work centres communities who face historical and ongoing marginalisation, structural exclusion, and the active dismissal of their knowledge and expertise. That is why we are deliberate and cautious about where and how AI plays any role.
-
Where AI has played a role in producing a piece of work, we are open about this. We welcome conversations with clients, commissioners, and partners about our approach at any stage of a project, and we actively invite discussion of any requirements or restrictions around AI use at the outset so that we can work within them from the start.
-
All work produced on our behalf must meet the same standards around AI use set out in this policy. When we work with associates, subcontractors, or collaborators, we are explicit with them about how we work and what this means in practice. This includes clear conversations at the outset of any working relationship about data confidentiality, transparency with clients, and the requirement that all substantive work, analysis, and judgement remains the work of the named person.
-
We do not use AI to replace professional judgement.
We do not use AI in ways that could compromise the confidentiality, dignity, or intellectual contribution of the people we work with and alongside.
We do not use AI to communicate with or on behalf of research participants, peer researchers, or people with lived experience.
We do not use AI to make or automate decisions about people, programmes, or recommendations.
We do not share or act on AI outputs without first checking them for accuracy, bias, and appropriateness.
-
This policy reflects our current practice and will be reviewed and updated as AI tools, sector guidance, and our own experience develop.
Last updated: 18 March 2026