Now's a good time to have the conversation!

Does your organisation have an AI policy?

Mar 13, 2024

Illustration by Neema Iyer

A few months ago, we had a colleague who was ChatGPT-ing us to death. We raised the issue and they denied using AI in their email responses, but it was glaringly evident. Irene and I then discussed creating an internal AI policy, and I wanted to open up the conversation to other organisations: how do you create a flexible and responsive generative AI policy for a nonprofit or small business? I will mainly approach the policy from the perspective of using generative AI, since at this point in 2024 that is the main interaction most non-profits will have with AI tools, though I may also touch upon automation, decision making, data analysis and so on. Finally, rather than copy-pasting a template from the internet, I believe it is more useful for organisations to think critically about these policies and develop them for their specific context.

First of all, why is AI being used at all?

For many, AI has become an assistant or a subservient partner, available to answer your questions, rephrase your text or draft your reports and communication. So, I'd broadly say that in the case of non-profits, it's being used to increase efficiency: cutting down the time required to create written and visual content, whether that's social media posts, emails, funding applications or reports.

We could divide these uses into the following categories:

  • Internal Communication (email, Slack, Teams, WhatsApp)

  • External Communication (social media, reports, proposals)

  • Product Development (research, videos, advocacy material)

  • Image Generation (for any of the above uses)

Now, on the flip side: what are the outcomes you'd most want to avoid with AI implementation in your organisation? It could be damage to your reputation from sharing inaccurate or inappropriate content, or a data breach from uploading sensitive data into public AI tools. If we start by understanding our why, it can help guide the questions that follow.

Let’s identify some of the key issues to consider when drafting this policy. 

My hope is that you can sit down with your colleagues and go through these questions and have a candid conversation on how you might approach these different issues.

  1. Safety and Security

    How will you ensure that your colleagues do not upload sensitive information to AI platforms? This could be personal data about your staff held in HR systems, or personal data about the people you serve if you keep rosters or conduct research and evaluations. If you plan to use AI systems to analyse data or write sections of your research, will you inform participants before they provide the information? Note-taking AI applications such as Otter, for example, can make meetings more accessible, bring more clarity and include those who couldn't join due to childcare or timezone constraints; however, these benefits must be carefully balanced against confidentiality and compliance with internal and external data protection rules and regulations (one lightweight way to make this concrete is a pre-submission check like the sketch after this list).

  2. Bias and Stereotypes

    How will you ensure that the imagery or content you use from GenAI platforms does not reinforce harmful stereotypes? What is your tolerance for disfigurement or other errors in AI-generated images? Are you OK with using AI images in your public-facing content?

  3. Replacement of Humans

    What is your stance on cutting down the number of staff or staff hours and replacing their work with AI systems? How might you mitigate the anxiety that staff might have about their possible replacement?

  4. Quality and Accuracy of the Outputs

    How will you ensure that the content produced by AI systems is not rubbish or full of hallucinations, and that quality assurance systems are in place to review the outputs for accuracy? Thinking about the African context, there are still many challenges with localisation and with capturing local nuance in both written and visual content.

  5. Use of Trustworthy Products

    Which products do you or your colleagues use regularly? Is there a list of products that your team or IT department has signed off on as safe for all staff to use?

  6. Violation of Intellectual Property

    What if the images or material used in your projects belong to someone else and are being used without proper attribution? What procedures could you put in place to verify the ownership and copyright status of content before it's used in your projects, especially when it's sourced from AI platforms? And in the event of a copyright claim, how might you handle it?

  7. Automated Decision Making

    To what extent will you allow AI systems to make decisions for your organisation? Will you use them to filter applicants when hiring, or to make decisions on wages, promotions or dismissals? What ethical guidelines might you adopt if you do use automated decision making that affects human beings?

  8. Integrations

    What rules will you have to allow, prohibit or monitor how your colleagues connect or integrate AI systems with your existing software? What criteria will you use to determine whether a particular AI integration is a must-have for your organisation and furthers your mission and vision?

  9. Sustainability and Environmental Impacts

    Will you give any consideration to the environmental impacts and high energy usage of AI systems in your work?

  10. Disclosure

    If you use AI in your everyday work or on specific projects, will you disclose this information to your audiences, partners or donors? Would it be helpful to develop guidelines for how and when to disclose AI use in your outputs?

  11. Liability

    If something goes awry due to the use of AI in your work, such as publishing inaccurate or inappropriate information or content, how will you respond?

  12. Compliance

    How will you ensure that all your colleagues or staff are on board with your AI policy? What will you do if you learn that a staff member has been in breach of the AI policy?

  13. Training

    What processes (and budget) do you have in place to train your colleagues and staff on all of the issues mentioned above, and on the rapidly changing field of AI as it applies to your work and sector?

  14. Monitoring and Updating

    How will you ensure that your AI policy is working well AND keep abreast of developments in AI to continue updating and improving upon the policy? How will you communicate these changes to your colleagues and staff each time you make a change?

  15. When you’re just annoyed

    How will you respond to colleagues who use GenAI at times that you personally feel are inappropriate, such as interpersonal communication?
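
Before the closing thought, here is one way to make question 1 less abstract. Below is a minimal, hypothetical sketch in Python of a pre-submission check that flags likely personal data (just email addresses and phone numbers here) before a colleague pastes text into a public AI tool. The patterns, the function name flag_personal_data and the example text are all illustrative assumptions, and simple pattern matching is no substitute for real data protection practice, but even a crude script like this can turn a policy line into a habit.

    import re

    # Illustrative patterns for two common kinds of personal data.
    # A real check would cover more: names, ID numbers, addresses, etc.
    PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def flag_personal_data(text):
        """Return warnings for anything in `text` that looks like personal data."""
        warnings = []
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                warnings.append(f"Possible {label}: {match.group()!r}")
        return warnings

    if __name__ == "__main__":
        draft = "Ask Jane (jane@example.org, +256 700 123456) about the survey."
        for warning in flag_personal_data(draft):
            print(warning)
        # A team norm might be: if anything prints, redact the draft
        # before pasting it into a public AI tool.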

This is the list (for now). If you have any others that I should add, please send me a message here! Happy policy making.