As a nonprofit evaluation consultant, I initially found the idea of using AI tools like chatbots and natural language processing intimidating. My background is in public health, not computer science, so the thought of engaging with complex technology felt daunting. However, my journey into AI has shown me that the real barriers are not just technical. They are rooted in things like imposter syndrome, nervousness, ethical considerations, and simply not knowing where to start.
Imposter Syndrome and Nervousness
I’ll be the first to admit that I felt in over my head when I first considered using AI. There’s a pervasive myth that you need to be a tech guru to use these tools effectively. This myth can trigger imposter syndrome, making us feel that we don’t belong in the tech-savvy world of AI. But here’s the thing: these AI tools aren’t just for Silicon Valley engineers. They are designed for people like us – program managers, community organizers, social workers – experts in our fields, but not necessarily in technology.
When I finally decided to give an AI tool a try for an evaluation project, I discovered it wasn’t as complicated as I had feared. Yes, there was some learning to be done, but I saved time and produced a better product on my first real attempt, and now I see a clear benefit every time I use it. The user interfaces are intuitive, you don’t need to code anything, and you can get helpful results very quickly.
Ethical Considerations
Another significant barrier is the ethical considerations surrounding AI use. As nonprofit professionals, we are often deeply concerned about data privacy and the potential misuse of technology. These concerns are valid and should not be dismissed. However, many AI tools now offer robust privacy protections and allow us to use their capabilities without uploading sensitive data.
The key is to know where your boundary is – likely somewhere between not using AI at all and handing an AI tool social security numbers. I’ve found a balance in using it for general tasks, never uploading sensitive data, and disclosing my use to my collaborators. If you work for an organization, establishing standards for safe and secure AI usage and ensuring staff are aware of and trained on those standards can mitigate many ethical concerns. By addressing these issues head-on, we can embrace AI’s benefits without compromising our ethical principles.
Not Knowing Where to Start
For many, the biggest hurdle is simply not knowing where to start. The world of AI can seem overwhelming, with more than 5,000 different tools and applications available (Futurepedia.io, July 2024). It’s easy to feel paralyzed by the choices and unsure of how to begin. My advice? Start small, with a free plan from one of the major tools (ChatGPT, Claude, Copilot, or Gemini). Pick a task that you’re familiar with and experiment with an AI tool for that specific purpose. For example, use AI to help generate ideas for a fundraising campaign or draft an outline for a strategic plan.
When I first started, I committed to ‘chatting’ with the AI tool for just 15 minutes to see what kind of results I could get. This small, manageable step helped me build confidence and gradually expand my use of AI.
Empowerment Through AI
Now, I won’t pretend that I’ve become a total AI expert. There’s still a lot for me to learn. But I feel significantly more empowered and confident about incorporating these tools into my work. No longer do I see them as inaccessible; they’re just another handy resource that can make my job a little bit easier.
So, to my fellow nonprofit professionals, if you’ve been hesitant to experiment with AI, take it from me – you can do this. These tools are designed for people just like us. With a bit of guidance and a willingness to learn, you’ll be using them with ease in no time. AI can free us to spend more time doing what we do best: applying our heart, passion, and dedication.
