By Jana Eisenberg
The 2022 launch of ChatGPT is a reasonable marker for when many of us became aware of artificial intelligence tools. In reality, AI has been around for decades, powering everything from GPS routes to streaming recommendations to spell-check.
Social work’s leading bodies, including the National Association of Social Workers, acknowledge the field’s need to ethically and effectively harness emerging technologies. The School of Social Work agrees.
Michael Lynch, clinical associate professor, and Alexander Rubin, clinical assistant professor, are helping frame the conversation about whether and how social workers should use AI.
Their recent research resulted in a presentation for social workers and educators about how this technology could help clients or enhance student learning in practicum settings. They are also the authors — with Todd Sage and Melanie Sage, clinical associate professor and adjunct instructor, respectively — of an AI guide for practicum educators.
Over the next academic year, Lynch, Rubin and Clinical Associate Professor Katie McClain-Meeder will present their findings at conferences for the Council on Social Work Education and the International Congress of Law and Mental Health. They also plan to continue their research by surveying social work professionals across the U.S.
Why should social work practitioners, educators, researchers and students consider technology as an element of their work?
Michael Lynch: Those involved in social work care about the good of society and need to be part of larger conversations around using technology, while advocating for human-centered approaches. With AI specifically, there are potential risks. If we, as educators and leaders, don’t talk about AI, we leave room for some of those harms.
What have you seen in the academic landscape?
Alexander Rubin: The perception of AI differs depending on who you are. Mike and I are at the crossroads of practice and education, and we are curious about it. In higher education, many see students using AI to draft a paper as academic dishonesty, and it can run that risk. Students and younger people know that AI is here to stay. There are varying ideas about how (or whether) to address AI in the academic setting.
UB recognizes that AI is an emerging and powerful technology and is saying, “Let’s get curious about it,” as evidenced by Empire AI, the Institute for Artificial Intelligence and Data Science, and other university initiatives.
What is the attitude at the School of Social Work?
ML: Attitudes among the faculty reflect the larger world: Some are leaning into it and using it, and some say it’s ruining the profession. So we formed an ad hoc committee to develop guidelines, and the framework we came up with is really good. People, especially educators, are hungry for resources like that.
As part of your research and recent activities, you’ve conducted AI trainings and offered participatory presentations for other schools, practitioners and practicum educators.
AR: For one of them, we invited some of our social work partners who are active practicum trainers: They have student interns, they provide supervision and so on. We walked them through what AI use in social work practice could look like and how it can apply to student practicum learning. We laid the groundwork and dispelled certain myths: Here’s what it is, here’s what it is not.
ML: We touch on some ethical implications, including the potential for bias in AI training models. We provide case scenarios where AI could help with tasks that don’t require a lot of critical thinking. For example, can it help you draft a social media post? Develop survey questions? Then participants form small groups and practice using the tool. It’s been well received.
We’ve already touched briefly on the future. Where do you see generative AI going within social work?
AR: We live in a technological era; people use technology almost whether they want to or not. Generative AI is already giving us a new relationship with technology, so we might as well learn more about it. If the profession doesn’t understand it and establish guidelines and policies to help everyday social workers, one of two things can happen: People miss out on the opportunities it presents for social work, or they start using generative AI in unsafe or unethical ways because they don’t know how to use it responsibly. Somebody needs to fill that gap.
In one paper, Alexander Rubin, Michael Lynch, Todd Sage and Melanie Sage offer a guide for AI in social work practice, including sample policies, prompts and case studies. Here are four key recommendations:
1. Think critically. Question whether you should use AI in a situation based on social work ethics, client confidentiality and other concerns.
2. Analyze output. AI tools can streamline tasks and provide insights, but they can also generate biased or incorrect information. Carefully review all AI-generated content and adjust as needed.
3. Be transparent. Set guidelines for when and how AI may be used in your agency to reduce ambiguity and help staff feel more comfortable using these tools.
4. Provide training. Host workshops that allow people to test prompts, explore tools and discuss the benefits and limitations of AI in practice.