Artificial intelligence is confidently finding its way into NGOs: their work, research projects, and educational initiatives. It increases work efficiency, reduces costs, and helps reach new audiences. But are we using it responsibly? Do we know what rights our users have? Are we clear about who is responsible if something goes wrong? Here are six questions that every organization, especially a non-profit, should ask itself before implementing AI solutions. Instead of a summary, we’ll share how to create an AI code of conduct for your NGO!
#1 Who’s responsible when AI makes a mistake?
Since there are currently no specific regulations concerning liability for damage caused by AI, general provisions apply, in particular those of the Polish Civil Code, which means primarily fault-based liability. In other words, a legal entity (a natural person, a legal person, or an organizational unit with legal capacity) may be held civilly liable for damage caused by an act or omission related to an artificial intelligence (AI) system if fault can be attributed to it.
In the context of fault-based liability, an entity may face legal consequences if:
- The entity committed intentional misconduct, which occurs when it consciously and with direct or indirect intent violates legal norms or rules of social coexistence, leading to damage. In relation to AI, this could involve deliberately ignoring the identified high risks associated with the AI system’s operation and accepting the possible negative consequences, despite being fully aware of the system’s potential errors or harmful effects.
- The entity committed unintentional fault (negligence), which occurs when an entity fails to exercise due diligence required in a given situation, resulting in damage, even if it did not intend to cause it. In the context of AI, an unintentional fault could manifest itself in:
  - failure to exercise due diligence during the design, development, or testing stages of an AI system, e.g., through the use of inappropriate training data or insufficient safety testing;
  - inappropriate selection of an AI system for a specific task, or its incorrect configuration;
  - lack of adequate supervision of the AI system’s operation, including failure to monitor its functioning and respond to detected anomalies or errors;
  - insufficient protection of the AI system against unauthorized access or manipulation.
Let’s not forget that not only “organizations” but also individuals (e.g., the programmer, the AI system’s operator, the user) can be held liable.
Currently, Polish law, including case law and legal doctrine, is at an early stage of developing more specialized rules on liability for the actions of artificial intelligence, which means future rulings are not fully predictable. It’s worth noting that the legal discussion is opening up to the possibility of moving towards strict (risk-based) liability, a regime that simplifies the pursuit of claims for the injured party, as it does not require proving fault on the part of, for example, the entity responsible for the AI system.
#2 How to protect personal data in an NGO’s AI system?
When an AI system processes personal data, e.g., for the purpose of profiling beneficiaries, analyzing behavior, or personalizing services, the organization must comply with all GDPR requirements.
If you wish to ensure the safety of the data you’re processing, you should ask yourself these questions:
- What is my legal basis for processing this data? If it’s consent, am I sure I have valid, informed consent from the user for the processing of their data by AI tools?
- Is the scope of data limited to a minimum, in accordance with the principle of minimization (i.e., we only process what is really necessary)?
- Did we check whether we needed to conduct a Data Protection Impact Assessment (DPIA)? This will often be required when using AI.
- How do we meet transparency and information requirements? Unless we anonymize data, we’ll have to inform people clearly, understandably, and in an easily accessible manner that we’re using AI to process their personal data.
And this can go on and on – I could dedicate a whole other article to data processing.
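By way of illustration, here is a minimal sketch in Python of what the minimization principle can look like before a record ever leaves your organization. The field names, record structure, and helper functions are all hypothetical assumptions, not a production-ready tool:

```python
import hashlib

# Fields we actually need for the AI analysis (principle of minimization);
# everything else is dropped before the record leaves the organization.
ALLOWED_FIELDS = {"age_bracket", "region", "program", "request_summary"}

def pseudonymize_id(beneficiary_id: str, secret_salt: str) -> str:
    """Replace a direct identifier with a salted hash; the mapping stays internal."""
    return hashlib.sha256((secret_salt + beneficiary_id).encode()).hexdigest()[:12]

def prepare_for_ai(record: dict, secret_salt: str) -> dict:
    """Return a minimized, pseudonymized copy of a beneficiary record."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    minimized["ref"] = pseudonymize_id(record["beneficiary_id"], secret_salt)
    return minimized

record = {
    "beneficiary_id": "B-1042",
    "full_name": "Jan Kowalski",     # never sent to the AI tool
    "email": "jan@example.org",      # never sent to the AI tool
    "age_bracket": "25-34",
    "region": "rural",
    "program": "job-training",
    "request_summary": "needs help preparing a CV",
}

print(prepare_for_ai(record, secret_salt="keep-this-secret"))
```

Keep in mind that a salted hash is pseudonymization, not anonymization: as long as the organization can link the reference back to a person, the data remains personal data under the GDPR.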
#3 Does the user know they are talking to AI?
Transparency is one of the fundamentals of the ethical use of artificial intelligence. Under the EU’s AI Act, organizations acting as providers are required to clearly inform users that they are interacting with an AI system in situations where this is not obvious to the recipient.
Regardless of the AI Act’s stipulations, informing users that they are dealing with AI allows them to make more informed decisions and risk assessments. Simply providing this information does not automatically exempt an organization from liability for the AI system’s errors or harmful actions, but an organization that operates transparently is in a stronger legal position.
So, yes. It is undoubtedly “better to inform” your users, because:
- This can weaken arguments about the organization’s fault in potential civil disputes over compensation, as the organization has exercised due diligence in providing information.
- This builds trust and is consistent with ethical principles, which in the long term may also reduce the risk of legal disputes.
- This protects against allegations of unfair practices.
It’s important to remember, though, that transparency about the use of AI is not, in itself, an insurance policy against all liability. If an AI system is flawed, discriminatory, violates other rights (e.g., privacy in ways other than just a lack of information), or causes harm for other reasons, the organization may still be liable. Transparency, however, is a fundamental step toward responsible AI implementation.
#4 How to deal with biases in artificial intelligence algorithms?
Artificial intelligence “learns” from the data it receives. The issue is that this data often reflects biases existing in society, such as stereotypes or inequalities related to gender, origin, age, or economic status. As a result, an AI system may not only replicate these patterns but even reinforce them, leading to discrimination or the exclusion of certain groups.

Here’s an example from the NGO world: imagine that a non-profit uses AI to preselect candidates for a scholarship program designed to level the playing field in professional opportunities. If the historical data used to train the algorithm contained a disproportionate number of applications from men in big cities, the system may “learn” that they are the preferred type of candidate. As a result, the AI may systematically rate equally strong applications from women or people from smaller towns lower, perpetuating the very inequalities the organization set out to combat.
For social organizations that use AI, e.g., in program admissions processes, beneficiary needs analysis, or support personalization, counteracting algorithmic bias is not only an ethical obligation but also a legal one. This obligation stems, among other things, from the provisions of the Polish Labor Code (for example, Article 18³ᵃ § 1 on equal treatment in employment) and general anti-discrimination regulations.
Moreover, the EU AI Act classifies systems used for recruitment or benefit assessment as high-risk systems. This imposes specific obligations on their providers and deployers, including in terms of data quality and human oversight, to minimize the risk of discrimination.
To minimize the risk of bias in algorithms, it is worth implementing the following in particular:
- A “human-in-the-loop” mechanism: establish within your organization that AI never makes final, fully automated decisions on sensitive matters (e.g., rejection of an application, exclusion from a program);
- Testing results for fairness: it is not enough to check the data just once. The algorithm’s results should be tested periodically to verify that key indicators (e.g., the percentage of positive decisions, the average rating) are similar across demographic groups, as shown in the sketch after this list.
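To make the fairness check concrete, here is a minimal sketch, assuming hypothetical decision records with a `gender` field and an AI-assigned `accepted` flag; the 80% threshold is the well-known “four-fifths” heuristic, used here purely as an illustration:

```python
from collections import defaultdict

def positive_rate_by_group(decisions, group_key):
    """Share of positive decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        group = d[group_key]
        totals[group] += 1
        positives[group] += int(d["accepted"])
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical batch of AI-assisted scholarship decisions.
decisions = [
    {"gender": "F", "accepted": True},
    {"gender": "F", "accepted": False},
    {"gender": "F", "accepted": False},
    {"gender": "M", "accepted": True},
    {"gender": "M", "accepted": True},
    {"gender": "M", "accepted": False},
]

rates = positive_rate_by_group(decisions, "gender")
print(rates)  # {'F': 0.33..., 'M': 0.66...} - a gap worth a closer look

# Simple alert: flag if any group's rate falls below 80% of the best group's.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Warning: possible disparate impact - escalate to human review.")
```

The point is not the specific threshold but the habit: run the same check after every decision round, and route anything suspicious back to a human, in line with the human-in-the-loop principle above.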

Technology is not neutral, but we can (and should) try to make it fairer.
#5 Who owns the results of AI’s work?
Artificial intelligence generates texts, graphics, analyses, and other works (so-called outputs). But who owns the rights to this output? Is it the organization? The employee writing the prompt? Or perhaps the tool provider? The answer is certainly not simple; it’s multi-layered.
Layer #1: Copyright – determined by human contribution
According to the Polish Act on Copyright and Related Rights, copyright protection is granted exclusively to a human creator (this is, moreover, an international rule). This means that:
- If the output was generated by AI without a meaningful, creative human contribution (e.g., from a simple prompt), it is not considered a “work” under Polish law;
- If a human makes a significant creative contribution, e.g., through deliberate, precise, multi-stage formulation of prompts, and through selecting, editing, compiling, and shaping the final, one-of-a-kind result, they may be considered the creator (or co-creator) of the final work.
Layer #2: Protection of artificial intelligence output
However, the lack of copyright protection doesn’t always mean one is free to copy everything. There are other protective mechanisms, for example:
- Protection under personal rights: the Polish Civil Code (Article 23) lists “artistic, scientific, or inventive work” among the personal interests it protects. This means that even if a work’s copyright status is uncertain, the connection between a human and the result of their intellectual work is protected.
- Protection against unfair competition: in particular, if an organization has invested effort and resources in creating and promoting unique graphics, slogans, or an entire PR campaign (even with the help of AI), and another entity begins to use them in a way that misleads the audience as to their origin, this could be considered an act of unfair competition.
Layer #3: The contract with the provider – an AI tool’s terms and conditions
Even if Polish law allows us to take our creative contribution into account, what does the AI tool’s provider say? The Terms of Service are a contract that binds the organization. Many platforms, particularly in their free or cheaper versions, claim a license (permission) to use the output in specific ways.
Either way, it is always worth checking the terms and conditions of the AI tools we use within our organization, also in light of what I write about in the following paragraph.
Layer #4: Your data (input) – hidden risks
What happens to the content you put into a prompt (your input) is just as important. If, when creating a report, employees of your NGO feed sensitive data about beneficiaries or confidential organizational strategies into a publicly available AI tool, they must be aware that the provider’s terms and conditions may allow for the analysis and use of this data.
#6 Is regularly evaluating and auditing the performance of AI tools in non-profits necessary?
Implementing artificial intelligence does not end on the day the tool goes live. Treating AI as something you can “set up and forget about” is an easy way to lose control of it and, in the context of your organization’s work, exposes you to serious risk.
In addition to the arguments for AI oversight I’ve already mentioned, it’s also worth noting a phenomenon known as “model drift”: a system that initially works well can become less and less accurate over time, or begin to generate flawed results, as the reality its training data described changes.
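What could ongoing evaluation look like in practice? A minimal sketch, assuming the organization logs the tool’s accuracy each month on a small, human-verified sample; the dates, figures, and the 5-point threshold are all illustrative assumptions:

```python
# Hypothetical monthly accuracy of an AI tool, measured on a human-verified sample.
history = {
    "2025-01": 0.91,  # accuracy at deployment (the baseline)
    "2025-02": 0.90,
    "2025-03": 0.88,
    "2025-04": 0.83,
}

BASELINE = history["2025-01"]
DRIFT_THRESHOLD = 0.05  # alert if accuracy drops more than 5 points below baseline

for month, accuracy in history.items():
    if BASELINE - accuracy > DRIFT_THRESHOLD:
        print(f"{month}: accuracy {accuracy:.2f} - possible model drift; "
              "schedule an audit and consider retraining or replacing the tool.")
```

Even a lightweight routine like this turns “set up and forget” into “set up and watch”, which is most of the battle.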
In short, the lack of regular evaluation exposes an organization to three main types of risk:
- Operational risk: ineffective or flawed AI performance can undermine project objectives and harm beneficiaries;
- Reputational risk: NGOs know better than most organizations that a publicized system error can undermine trust in the eyes of donors and the public;
- Legal risk: even undetected violations (e.g., discrimination of the kind I wrote about in point 4) can lead to legal claims and financial penalties.
Instead of a summary: A set of AI rules for NGOs? It’s worth having one!
Finally, a concrete suggestion: your organization should create an internal artificial intelligence code of conduct. Remember, it doesn’t need to be an extensive, formal document written in legal terminology. It should be the exact opposite: a practical guide and internal compass for the whole team, written in your own words and tailored to your specific needs and operational scale.
This kind of code of conduct will organize your knowledge, minimize risk, and ensure that everyone in the organization operates using the same, informed principles. Below are areas that are worth including in it:
- Liability and oversight:
  What is this chapter about? It’s about who is responsible for AI tools and what should be done when something goes wrong.
- Data protection and privacy:
  What is this chapter about? It’s about how we protect information (both the organization’s and our beneficiaries’) when interacting with AI.
- Transparency and fairness:
  What is this chapter about? It’s about honest communication and ensuring that our algorithms do not perpetuate harmful stereotypes.
- Intellectual property and rules for using the tools:
  What is this chapter about? It’s about what we are allowed to create with AI and who holds the rights to it (especially if an NGO has clients, it is worth considering whether they want to acquire copyright to your work).
- Team competencies and auditing the organization’s use of AI:
  What is this chapter about? It’s about the fact that technology is only as good as the people who use it.
Keep in mind that your code of conduct can be created over time, with the help of experts (lawyers, ethics specialists), and even with the help of… AI itself, which can help you edit and organize your thoughts. Most importantly, it should be a living document that genuinely supports your mission in the digital age.