Artificial Intelligence (AI) Policy

Policy Owner
Anders Hofstee

Policy approved

July 2025

Date of policy review

December 2025

1. Introduction

1.1. Preamble

Catalpa exists to make public services work better for people. We recognize that artificial intelligence presents both tremendous opportunities and significant responsibilities. AI has the potential to enhance our ability to serve communities, improve program delivery, and strengthen the systems we work with.

This AI policy has been created to ensure that our use of AI aligns with our fundamental commitment to putting people first. We aim to improve efficiency in our work while maintaining the human-centered approach that defines our methodology, maintain compliance with all applicable regulatory frameworks and donor requirements, and foster continuous learning and adaptation as we integrate AI tools responsibly into our ways of working. Ultimately, we are committed to using AI as a tool that amplifies our human capabilities rather than replacing the judgment, creativity, and collaborative relationships that are central to effective development work.

AI will be integrated into most of the tools we use in our daily work. However, it is important to note that AI is viewed as an assistant to enhance our efficiency and the quality of our work. It is not a replacement for our thought, judgment, or creativity.

Ultimately, responsibility for all content produced with AI lies with individual staff members.

1.2. Statement of Commitment

We are committed to using AI in a way that aligns with our core values and ethical standards. Furthermore, we will ensure that AI tools are used to benefit our work and the communities we serve, while adhering to all national regulations, international standards, and donor-specific AI policies that may apply to our programs and operations.

  • Donor and Program Contract Compliance:

All AI use must comply with any AI use policies that are obligated by donor and program contracts. Accordingly, staff must review and adhere to specific AI restrictions or requirements outlined in individual donor agreements and contracts, program-specific AI policies and guidelines, and grant terms and conditions related to technology use.

  • National and International Regulatory Compliance:

Our AI use must comply with national regulations governing AI use in the jurisdictions where we operate, donor policies on the ethical use of AI, international standards and frameworks for responsible AI development and deployment, and data protection and privacy laws applicable to our operations.

1.3. Policy Review and Ratification

As AI is developing quickly, this policy will be reviewed every six months to ensure it remains current and effective. During these reviews, any proposed updates or changes will be approved by the leadership team.

1.4. Policy Linkages

This AI policy is linked to our data privacy and data storage policies, as well as our safeguarding policies. These documents should be consulted together to ensure a comprehensive understanding and application. Additionally, this policy references and aligns with relevant international frameworks and standards, such as the UNESCO Recommendation on the Ethics of Artificial Intelligence.

2. Purpose and Scope

2.1. Purpose

The purpose of this policy is to outline how we engage with AI in our work. Specifically, it defines the ways we use AI, the tools we approve, what is allowed and not allowed, and the process for gaining approval for AI tools. Additionally, it ensures that our use of AI aligns with ethical standards, protects data privacy, and maintains compliance with all applicable regulatory and contractual obligations. We want to encourage innovative approaches to our work through the use of AI whilst providing guidance on best practice and risks.

2.2. Scope

This policy applies to all employees and contractors who use artificial intelligence tools and systems in their work with our organization. It comprehensively covers all AI-related activities, whether for internal operations or external projects, including the use of local AI models and locally-deployed AI systems.

3. Policy Principles

Our approach to AI is guided by four core principles:

  1. Do no harm: We commit to using AI in a manner that upholds our ethical standards and promotes fairness and transparency. We recognise that AI can reflect or reinforce existing biases and may present information from perspectives that marginalise certain groups, particularly in the contexts where we work. Knowing this, we will use it in a way that minimizes bias as much as possible.

  2. Privacy: We commit to using AI safely and securely. We prioritize data privacy by using AI tools in compliance with our data privacy policies to protect sensitive information, including personally identifiable information (PII).

  3. Beneficiary first: We focus on appropriate use by deploying AI to support and enhance our work, with particular emphasis on efficiency and quality improvement, in alignment with the Principles for Digital Development. AI will be used where it’s the most appropriate solution for a problem.

  4. Accountability: We maintain individual responsibility, ensuring that each person is accountable for their use of AI and that it aligns with our principles, privacy and data storage policies, and best practices. We also ensure comprehensive compliance with all applicable donor requirements, national regulations, and relevant international standards.

4. How we use AI

4.1. What is allowed

As long as we ensure compliance with all ethical guidelines and best practices, AI tools can be utilized in various ways. For instance, tools like ChatGPT and Claude can be used for research, ideation, and brainstorming. In software engineering contexts, AI tools such as GitHub Copilot and Cursor AI can enhance coding efficiency.

Additionally, AI can be used to generate illustrations, and tools like Figma, Claude, and ChatGPT can be employed for design projects, provided that their use aligns with our ethical standards and policies.

All AI-generated material (images, code, prose, etc.) must be checked by the responsible human for relevance, accuracy, and alignment with Catalpa’s quality standards before dissemination. In other words, a human must always be in the loop.

We recommend these tools, but others can be utilised as long as they align with this policy.

Some examples of allowed tasks are:

  • Supporting research and analysis of large amounts of information

  • Creating draft outlines for documents

  • Assisting with language translation

  • Code completion and syntax suggestions

  • Creating wireframe layouts

4.2. What is not allowed

There are several critical areas where AI use is prohibited or restricted:

  • Primarily, AI should not be used to process or analyze personally identifiable information (PII) – e.g. spreadsheets containing names and other PII – without explicit authorization from the Project Lead or Sector Lead. Robust privacy safeguards must be put in place, and all use must comply with relevant safeguarding policies. This applies to both internal Catalpa information and external project-related information.

  • We absolutely prohibit the use of AI in ways that could mislead or deceive stakeholders, including the creation of fake content, misinformation, or impersonation of others.

  • When using AI, we must ensure that the content we input is authorized for use and does not compromise the privacy or safety of individuals. We also require that the terms of service of AI systems respect the intellectual property of Catalpa and our partners, protect individual privacy, and comply with all safeguarding policies.

  • Use of automated note takers (e.g. meeting transcription tools) must be disclosed to participants and comply with privacy and safeguarding policies. Discussions should not be recorded without prior approval.

  • Uploading sensitive or proprietary information (e.g. contracts, tenders, code) to AI services that do not explicitly protect Catalpa’s intellectual property from use in model training, or from sharing beyond people authorized by Catalpa, is prohibited. (See Which LLM is right for your privacy needs? for examples.)

  • Finally, AI should not be used to make final decisions without human oversight, especially in areas involving sensitive data or significant impact.

Catalpa staff should check with Operations for the latest information on which tools are and are not allowed to be used (see 4.4 for further information).

4.3. Local AI Models and Systems

The use of local AI models or locally-deployed AI systems must also comply with our privacy and data storage policies, both at the organizational level and for individual programs. Importantly, local AI implementations are subject to the same ethical guidelines, data protection requirements, and approval processes as cloud-based AI services.

4.4. Approval process for Catalpa personnel

Evaluate the AI tool first

Our approval process is designed to balance accessibility with appropriate oversight. If an AI tool is available and does not require payment or special access, staff may use it for evaluation purposes at their discretion, provided all guidelines are followed.

If you are using it to directly support your work

If a staff member intends to use an AI tool beyond evaluation, for example in direct support of a work-related task, they should confirm with their team leader, manager, or Operations before progressing. The approval process must take into account the service’s security and privacy posture, such as compliance with ISO 27001 or the GDPR.

We seek to encourage exploration while also maintaining oversight: any new AI tool should first go through a short evaluation period, after which the staff member should seek confirmation from their team leader or manager before using it directly in their work.

If it is a paid tool

When a paid tool is required, or if the AI use is sensitive or novel, the staff member must seek verbal approval from their team leader, who will then coordinate with Operations to manage licensing and records if approved. The budget approval process for AI tools follows the same organizational procurement processes and approval hierarchies as other technology purchases within the organization.

Maintaining central records of AI tools used

Finally, Operations will maintain a record of AI tools in use, including an updated list of approved tools, cautionary-use tools, and prohibited (“no-go”) tools. This list will be reviewed periodically by the leadership team and directors and shared with staff for transparency.

5. Staff Development and Socialization

5.1. Training

As we integrate AI tools within the organization, there will be ongoing sharing, socialization and training opportunities for staff to ensure we can ethically adopt these tools and use them to increase our efficiency and collaboration. This comprehensive approach includes training sessions on ethical AI use, sharing sessions on new AI tools and best practices, collaborative learning opportunities to share experiences and insights, and continuous assessment of AI impact on our work processes and outcomes.

5.2. Monitoring

Regular monitoring and evaluation of AI use will be conducted to ensure compliance with this policy and to identify opportunities for improvement in our AI practices. This ongoing process includes periodic reviews of AI tool usage, assessment of outcomes, and updates to training and guidance materials as needed.

6. Roles and Responsibilities

6.1. Directors and Leadership Team

  • Provide overall oversight of AI use in the organization.

  • Approve updates to this policy every six months.

  • Review the list of approved, cautionary, and prohibited AI tools and share it with staff on a regular basis.

6.2. Operations Team

  • Maintain and update the record of AI tools in use.

  • Manage licensing and procurement for AI tools.

  • Ensure compliance with procurement and data protection processes.

6.3. Team Leaders and Managers

  • Approve or decline requests for work-related AI tool use.

  • Monitor appropriate use of AI within their teams.

  • Support staff in understanding and applying this policy.

6.4. All Staff

  • Follow this AI policy in their daily work.

  • Vet AI-generated content before dissemination.

  • Seek approval for new AI tools before operational use.

  • Disclose use of automated note-takers and protect sensitive information.
