
Like businesses in virtually every other industry sector, mission-driven organizations are recognizing the power of artificial intelligence (AI) and how it might be harnessed to help them be more efficient and effective. It is equally important for them to recognize, however, that AI does not represent a panacea for addressing all of their issues. Like other technologies, AI has its limits, as well as the potential to be exploited for less than honorable purposes.

In practical terms, there is little debate that AI can be a tremendous asset to mission-driven organizations, which are often resource-constrained and must find ways to do more with less. Like their for-profit counterparts, such organizations can use AI to improve productivity, make better decisions and increase their impact on the communities they serve by automating repetitive tasks, analyzing large datasets, and identifying patterns and insights that humans may miss.

AI-powered marketing tools, for example, can help mission-driven organizations deliver more tailored and impactful messages to their target audiences. AI can be used to enhance social media strategies by automating scheduling, analyzing engagement metrics, identifying trends, and creating inclusive content. It can even optimize fundraising efforts and enhance donor and volunteer engagement by personalizing communications, predicting patterns of behavior and responding to inquiries.

Beyond such day-to-day business applications, though, AI is already being used by organizations around the world to implement programs for social good. Amnesty International, for example, recently employed AI to train human moderators to identify and quantify online abuse against women. Makerere University’s AI research group partnered with Microsoft to establish an electronic agricultural marketplace in Uganda. Satellite imagery is helping to predict poverty in Darfur’s conflict zones.

According to a recent report by the McKinsey Global Institute, AI capabilities in the areas of computer vision, natural language processing, and structured deep learning are particularly applicable to a wide range of social issues. Such capabilities are highly effective in completing classification and prediction tasks, and in recognizing patterns in unstructured data – providing mission-driven organizations with key findings that can impact program structure and effectiveness.
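To make that capability concrete, the short sketch below shows the kind of classification task the report describes, applied to a stream of unstructured supporter messages. This is an illustrative example only; the categories, sample messages and the use of scikit-learn are assumptions for the sake of the sketch, not details drawn from the McKinsey report or from any particular organization.

```python
# Illustrative sketch (assumptions, not from the report): a small nonprofit
# classifying unstructured supporter messages so they can be routed to the
# right team without staff reading every one. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: past messages and how staff categorized them.
messages = [
    "I'd like to set up a monthly donation",
    "How do I sign up to volunteer this weekend?",
    "Please remove me from your mailing list",
    "Can I donate supplies instead of money?",
    "What volunteer roles are open near me?",
    "Stop sending me fundraising emails",
]
labels = ["donation", "volunteer", "unsubscribe",
          "donation", "volunteer", "unsubscribe"]

# TF-IDF features plus logistic regression: a simple, widely used baseline
# for text classification.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(messages, labels)

# Predict a category for a new, unseen message.
new_message = "How do I volunteer at the weekend food drive?"
print(model.predict([new_message])[0])  # likely "volunteer", given word overlap
```

In practice an organization would train on far more labeled examples, but the pattern is the same: historical, human-labeled text goes in, and the model surfaces a prediction a small team can act on.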


The same report indicates that these AI capabilities could be especially impactful in addressing four social issues: health care, where AI is already being used to meet challenges such as early-stage diagnosis, optimized food distribution channels and disease transmission prediction; education, in which both teacher productivity and student achievement could be improved by AI-enabled technologies such as adaptive learning; security and justice, where AI solutions can be applied to track patterns of criminal behavior and mitigate workplace bias; and equality and inclusion, in which AI can be employed to identify, reduce and eliminate bias based on race, sexual orientation, religion, citizenship and disability.

Clearly, there is great potential for AI to be a game-changer for mission-driven organizations. It is just as obvious, however, that there are specific risks in using AI. As with other technologies, AI has the potential to be exploited. Biases, for example, can be embedded in AI algorithms, leading to discriminatory outcomes such as excluding certain groups from receiving support or services. The ongoing tension between data privacy, which demands that sensitive personal data be kept from public exposure, and the call for transparency in AI-generated decisions could also inhibit adoption.

AI, at least as it currently exists, also has basic limitations. It cannot be used for work that relies on non-verbal observations, such as behavioral interviews. It does not understand underlying context or nuance and is unable to incorporate empathy.

With these risks and limitations in mind, mission-driven organizations employing AI are urged to create an ethical-use policy that commits to fairness, accountability, operational transparency, bias reduction and privacy. Such a policy should clearly spell out how data used in AI applications is to be collected, stored and used; how AI decisions are to be made and communicated; and what procedures are in place for identifying and mitigating data bias.

The Organization for Economic Cooperation and Development (OECD) provides a good case in point. Shining a spotlight on the paramount importance of ethical AI use, the OECD has put together guidelines for developing innovative and trustworthy AI. These Principles on AI state that AI should drive inclusive growth and sustainable development; be designed to respect the rule of law, human rights, democratic values and diversity; be transparent, so people can understand AI outcomes; be robust, safe and secure; and be deployed with accountability, so that organizations can be held responsible for the AI systems they develop and use.

With proper precautions in place and a full understanding of both the benefits and risks associated with AI, it is clear that mission-driven organizations can work smarter, make better data-driven decisions and enhance their overall efficiency and effectiveness by using AI. It is important to remember, however, that AI is a tool – and nothing more – that can be leveraged by such organizations for social good. And while it can have a transformative impact, AI should never replace the human connection that is so essential to these organizations’ reason for being.
