To help us think through these vital—but often elusive—questions, the Humanitarian AI+MERL Working Group will be joined by Kristin Sandvik to talk about her recent research on the AI deframing of humanitarian knowledge. merltech.org/the-humanita...
Although it’s a new year, when it comes to AI, humanitarians will likely continue to grapple with many of the same questions. How do humanitarians prepare for the unknown of AI? How can humanitarians anticipate the ways that AI could trouble, or even remake, foundations of how humanitarians work?
For a refreshing view of alternative AI models, please read the latest report from @merltech.bsky.social merltech.org/africa-ai-me...
The MERL Tech Initiative is pleased to launch our publication, “Made In Africa Artificial Intelligence Approaches In Monitoring, Evaluation, Research And Learning: A Practitioner Perspective And Landscape Study”, authored by Varaidzo Faith Magodo-Matimba.
merltech.org/africa-ai-me...
What could happen if we decenter large, resource-intensive, general-purpose AI models, which cause undeniable harm to the environment, the climate, and communities around the globe?
Later this month, we are hosting a 2-day course for those who want to explore using SLMs in their work. 👇
This is a hands-on opportunity for those who want to learn about how small language models, built in purposeful and controlled ways, and managed with thoughtful design, can be an alternative. merltech.org/small-is-bea...
This course is for professionals in social impact, international development, and humanitarian work who want to explore using SLMs in their work. Participants will gain an introduction to SLMs, discover how the concept of “open” manifests within SLMs, and learn how to set up an SLM for their use.
Because SLMs are built with fewer computational resources, they can offer faster processing, more possibilities on smaller devices, and greater privacy and security, and they can be tailored by you and your team.
Large, resource-intensive, general-purpose AI models are causing undeniable harm to the environment, the climate, and communities around the globe. Small language models, built in purposeful and controlled ways, and managed with thoughtful design, can be an alternative.
📢 We are hosting a course on Small Language Models (SLMs). Join us on October 30 and November 6 for a hands-on, 2-day workshop led by experts Cathy Richards and Jonas Norén.
merltech.org/small-is-bea...
The MERL Tech Initiative and Evaluation + Learning Consulting are partnering to offer a new virtual training on Data Analytics with AI on September 10 and 17, led by experts Liz DiLuzio, Zach Tilton and Linda Raftree.
Register at the following link to join: merltech.org/intermediate...
The NLP-CoP is hosting an event on August 21 to discuss how these AI directives may affect our work and what opportunities exist for advocates to raise concerns and fight for an approach to AI that is ethical and rooted in public well-being and sustainability. merltech.org/event-what-d...
Current concerns about AI’s adverse climate impacts are largely confined to operational energy and water use by data centers. While these impacts are important, they represent only a fraction of the ways in which AI can influence climate outcomes, write Felippa Amanta and Charlie Wilson.
Make sure you join the Community of Practice for some much needed, critical conversations about responsible, ethical applications of NLP and Generative AI for MERL in development, humanitarian, human rights, peace building, and philanthropy work: merltech.org/nlp-cop/
On August 14, we host Mpho Primus (Institute for Artificial Intelligent Systems, University of Johannesburg) and Chido Dzinotyiwei (Vambo AI) for a conversation about African languages, linguistic complexity and ethical and inclusive AI. merltech.org/event-on-aug...
📢 On August 7, we're welcoming tech and human rights policy expert Eva Blum-Dumontet for a conversation about CHAYN’s process of building a feminist AI that embeds trauma-informed principles.
merltech.org/what-does-bu...
We're kicking off the month with a session examining humanitarian accountability and AI use from a number of critical perspectives, featuring several expert speakers: merltech.org/join-the-hum...
August will be a busy month at the Natural Language Processing Community of Practice! 🧵 https://merltech.org/events/
We're launching our AI and Climate Working Group on July 17 at 10 am ET!
Join us if you’d like to hear about and help shape our plans for the rest of the year, and to discuss the many ways climate and AI intersect. merltech.org/join-us-on-j...
What can we learn from emerging evidence of GenAI use in Social and Behaviour Change chatbots?
Our NLP-CoP brought together six organisations using GenAI to deploy and evaluate SBC interventions. Here's what we've learned:
merltech.org/what-can-we-...
How are qualitative researchers navigating the use of AI tools by research participants? This month, the Ethics and Governance Working Group (EGWG) at the Natural Language Processing Community of Practice (NLP-CoP) is discussing the use of AI by research respondents.
merltech.org/qualitative-...
Technosolutionism, magical fixes, and lack of oversight: As one of our speakers noted during our event, “People in leadership positions are moving fast and breaking things, but when we’re talking about people’s lives, that isn’t a good thing.”
merltech.org/event-recap-...
Lastly, we were also featured on Humanitarian AI's podcast, discussing the challenges of assessing the effectiveness of humanitarian aid activities, including those that incorporate artificial intelligence. merltech.org/podcast-asse...
Our newly published report provides an overview of the use of AI in evaluation, with a specific focus on democracy initiatives, and comes with an accompanying policy brief. merltech.org/new-paper-ex...
We also shared this brief on artificial intelligence in the humanitarian sector, which synthesizes key information about AI's varied applications for critical humanitarian decision-makers. merltech.org/new-brief-ar...
First, a podcast episode featuring Linda Raftree on the REvaluation Podcast, focusing on whether AI really saves work in evaluation. Listen at the following link: merltech.org/revaluation-...
🧵This week at The MERL Tech Initiative, we published a bunch of new resources! 👇