Posted Monday, October 16, 2023 by Team Northwoods

Artificial Intelligence and ChatGPT for Human Services and Social Work: Dos and Don’ts

Artificial intelligence (AI) is rapidly evolving and changing the world as we know it, creating a paradigm shift in multiple sectors, including human services and social work. However, AI tools also pose unique challenges, particularly related to data privacy and trust in their outputs.

Early conversations about AI focused largely on decision-making and understanding how these tools could digest large amounts of information to surface critical insight that workers needed to support their decisions and actions. Think natural language processing (NLP), machine learning, deep learning, analytics … many agencies and programs, especially child welfare, have been piloting these types of AI for a while now. Recently, chatbots and generative AI tools powered by large language models (think ChatGPT or Bard) have begun to offer promising capabilities, from enhancing client communication to managing case details and streamlining manual tasks. Means-tested eligibility programs are also beginning to experiment with these newer capabilities that AI tools have to offer.

The Northwoods team has spent years learning alongside agencies to understand how NLP and machine learning can be used in casework. As part of our ongoing innovation efforts, we’ve also spent the past few months researching and testing potential applications for tools like ChatGPT. And, we’ve attended sessions at various conferences to learn more about the possibilities of chatbots and generative AI. 

Keep reading for an overview of emerging use cases and best practices to help agencies use these tools responsibly and effectively. But first, some considerations:

Because AI is still new and continuously evolving, many states and agencies are still researching, exploring potential use cases, and writing their policies around acceptable use. You should always consult those policies and guidelines before testing anything on your own.

Additionally, remember that AI and other technologies should be viewed as augmenting and advancing human capabilities, such as identifying patterns and surfacing trends, rather than replacing the professionals who use them; the professional still makes those final, critical decisions. A report by the World Economic Forum says, “While machines with AI will replace about 85 million jobs in 2025, about 97 million jobs will be made available in the same year thanks to AI. So, the big question is: how can humans work with AI instead of being replaced by it? That should be our focus.”

Lastly, keep in mind that publicly available tools like ChatGPT have taken steps to let users better manage and protect their data. However, OpenAI’s Privacy Policy still states that the information you share, along with the personal information collected automatically when you interact with the tool, can be used to train its models and could potentially surface in responses to other users who should not have access to it. In other words, a tool like ChatGPT does not provide the safeguards or assurances needed to treat the data you enter as protected under privacy regulations like HIPAA.

Using AI to Streamline Operations and Solve Business Problems

As AI models continue to advance, so do their capabilities. We’ve been testing and learning about a lot of new use cases that demonstrate just how many opportunities human services agencies now have to put AI to work on their behalf. Here are just a few examples:

  • Using ChatGPT to write notices that use plain language, so clients better understand them (see the sketch after this list for a rough example).
  • Leveraging a chatbot to support call centers by answering simple, routine questions that don’t require a human’s attention or analysis.
  • Using assistive AI to triage non-emergency 911 calls, detect and translate languages for transcribed calls, or to complete intake forms.
  • Using generative AI to review closed cases for quality assurance and reporting.
  • Using AI to flag workers who have higher-than-average denial rates (often referred to as “ineligibility workers”).
  • Using bots to process backlogs, identify changes (example: updating a person’s Medicaid case with a new address if they change it in SNAP), and flag cases that need a worker’s review. (Related resource: Putting the 'Human' Back in Human Services Through Robotic Process Automation)
  • Using ChatGPT to assist with writing high-quality case notes or developing a blueprint for goal setting and treatment plans.
  • Using ChatGPT to brainstorm concepts for a community awareness campaign (example: “Come up with five campaigns to help county human services agencies recruit more child welfare workers”) or write business-related documents (example: “Help me write a business justification explaining why a local human services agency with high turnover needs to invest in technology to support caseworkers.”)

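To make the first item on this list concrete, here’s a minimal sketch of what a plain-language notice rewrite could look like in code. It assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY environment variable; the model name, prompt, and notice text are all hypothetical, and per the privacy caveat above, real client information should never be pasted into a public tool.

```python
# Minimal sketch: draft a plain-language version of an eligibility notice.
# Assumes the OpenAI Python SDK (v1+); model name, prompt, and notice text
# are illustrative only. Never send real client data to a public tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

original_notice = (
    "Your benefits have been modified pursuant to agency policy due to an "
    "unreported change in household composition. Failure to provide "
    "verification within 10 days may result in termination of benefits."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Rewrite agency notices at roughly a 6th-grade reading "
                       "level. Keep every deadline and required action.",
        },
        {"role": "user", "content": original_notice},
    ],
)

print(response.choices[0].message.content)  # a worker still reviews the draft
```

Even in a sketch like this, the worker stays in the loop: the model produces a draft, and a human reviews it for accuracy before anything is sent to a client.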

Using AI to Support Decision-Making

Of course, no discussion of AI is complete without diving into its role in decision-making. AI tools are great for helping you sort through large amounts of information, make informed decisions, and facilitate your work. However, many industry leaders are nervous about their workforce relying on AI tools to make decisions that shouldn’t be made by a computer, especially in human services.

The rest of this post will dive into a couple of dos and don’ts of using AI tools to help support you when making important decisions.

  1. Don’t blindly accept the answers provided by an AI tool; do your own fact checking or research to verify your results.
  2. Don’t rely on AI to make your decisions; do use the information you're presented to kickstart the process and provide recommendations.

IMPORTANT: DON'T Blindly Accept the Answers Provided by an AI Tool

While today’s AI technology is truly impressive, it’s not magic, nor is it perfect. AI tools can produce wrong answers because of limitations in their underlying data or because they misinterpret your queries.

At the end of the day, AI tools are computers, which means they process questions like computers. As users, we must be very careful with how we word our requests, because these tools interpret them literally, sometimes too literally, and often miss what we really mean.

For example, a user once asked ChatGPT to make a list of all the countries in the world starting with the letter “O”. The user expected the chatbot to respond with just “Oman.” However, ChatGPT responded with “Oman, Pakistan, Palau, Panama...” because it interpreted the request as starting from the letter “O”: the list began with “O,” but continued through every letter of the alphabet that follows it.

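To show how easy it is to trip over that kind of literal reading, here’s a minimal sketch, under the same assumptions as the earlier example (OpenAI Python SDK, placeholder model name), that contrasts the ambiguous phrasing above with a version that spells out exactly what is wanted:

```python
# Rough illustration: the same question asked two ways. The second prompt
# leaves less room for a literal misreading. Prompts and model name are
# illustrative only; assumes the OpenAI Python SDK (v1+).
from openai import OpenAI

client = OpenAI()

prompts = [
    "List all the countries in the world starting with the letter O.",
    "List only the countries whose English names begin with the letter 'O'. "
    "Do not include countries that begin with any other letter.",
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(reply.choices[0].message.content)
    print("---")
```

Careful wording doesn’t remove the need to check the output, but it reduces how often the tool answers a question you didn’t actually ask.
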
Now, imagine asking a similar chatbot for advice on what you should do about a case. If it does not interpret what you are asking correctly, or does not fully understand the client’s unique situation, then the answers it gives could not only be wrong, but they could also be dangerous.

IMPORTANT: DO Your Own Fact Checking or Research to Verify Your Results

Even if you’re careful to provide an AI tool with relevant context or data to help it understand what you need, you should still do your own fact checking to verify the results are accurate. As we like to tell our customers, “Trust, but verify.”

For example, you could consult ChatGPT to better understand a law or policy in your state or jurisdiction, as long as you also ask it to tell you where you can find more information about its response, such as the specific paragraph or clause it is referring to. Then you can do a quick Google search to confirm the results. This saves a lot of time because you don’t have to sort through the legal documentation yourself, without putting yourself at risk of getting the details wrong.
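
As a rough sketch of that “trust, but verify” pattern in prompt form, again assuming the OpenAI Python SDK and a placeholder model name, with a made-up question for illustration:

```python
# Ask the model to name the exact section it relies on, so a person can
# verify the answer against the actual statute or policy manual.
# The question below is made up for illustration.
from openai import OpenAI

client = OpenAI()

question = (
    "Summarize the standard timeframe for reporting suspected child abuse in my state. "
    "For every claim, name the specific statute, section, or policy paragraph "
    "you are referring to so I can verify it myself. If you are not sure, say "
    "so instead of guessing."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

print(reply.choices[0].message.content)
# The verification step is still manual: look up each cited section and
# confirm it says what the model claims before acting on the answer.
```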

You could also use ChatGPT to brainstorm: what community resources are available for a family that you haven’t considered before? What medical devices could help an older adult safely navigate their home? What’s the best care setting for an individual with certain circumstances? With each question, some of the answers likely won’t apply to your client’s specific needs, so it’s up to you to do some additional research to determine what can be filtered out.

With any AI tool that supports decision-making, it’s critical that you can quickly and easily access the full source content that the tool is reading and analyzing to inform its results. This ensures that you have all the context and additional details needed to verify the tool’s findings are accurate.

IMPORTANT: DON'T Rely on AI to Make Your Decisions

Becoming a licensed social worker takes several years and hundreds of practice hours for a reason. The lengthy process of obtaining this licensure gives workers the knowledge and ability to arrive at the best decisions for their clients.

Jobs with a strong human element will be difficult for technology to replace. The skills social workers build through years of direct practice, like empathy, ethical judgment, recognizing and avoiding bias, and cultural competence, are all things a computer has a very difficult time replicating.

When a chatbot responds to a question, we aren’t quite sure what evidence it used to arrive at that conclusion. We don’t know whether it followed the best practices, policies, and regulations required for making good decisions for clients, or whether it used current, accurate data to guide its answer. This can lead to AI tools perpetuating past biases, using flawed reasoning, and overlooking important context.

Remember, you are the qualified professional. Relying on AI to replicate your expertise can lead to a lot of issues. At the end of the day, humans are still critical in taking all the facts and data and making the right decision for that situation. Since you are still making the decision, you are also responsible for the outcomes.

IMPORTANT: DO Use the Information You’re Presented to Kickstart the Decision-Making Process and Provide Recommendations

AI tools may not be capable of making decisions for you, but they can be invaluable in providing suggestions or potential next steps to steer you in the right direction.

There’s a great example of how a licensed professional can use a tool like ChatGPT, alongside their professional expertise, to explore a wider range of possible decisions and come to the right conclusion. It starts with a dog whose illness his veterinarians couldn’t seem to diagnose.

The dog’s vet ran all sorts of blood tests but couldn’t determine what was causing the infection, so the dog’s owner provided the test results to ChatGPT and asked it what the issue could be. The owner then shared ChatGPT’s suggestions, along with the reasoning behind them, with the vet.

The vet immediately ruled out a few suggestions he knew weren’t correct given his education and experience. However, a couple of ChatGPT’s suggestions seemed plausible, so he ran tests for those infections. Sure enough, one of them turned out to be the culprit. They were able to treat the infection and the dog made a full recovery!

How does this translate to human services? Here’s one example: an AI tool can comb through hundreds of documents and pull out a list of people mentioned in a case who could be contacted to help support a child. However, your expertise as a social worker and your familiarity with the case are still needed to rule out options that are not in the best interest of the child based on the family’s unique circumstances. You don’t have to spend hours sorting through documents to build the list, but you are still responsible for analyzing the information and using it to inform your next steps.

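Here’s a minimal sketch of what that document-combing step could look like, under the same assumptions as the earlier examples (OpenAI Python SDK, placeholder model name) and with made-up case-note text. In practice this would run inside an agency’s own secure, approved environment, never a public tool, and the resulting list is only a starting point for the worker’s review.

```python
# Minimal sketch: ask a model to list people mentioned in case notes so a
# worker can review possible supports for a child. The notes below are
# invented; real case data belongs only in a secured, approved environment.
from openai import OpenAI

client = OpenAI()

case_notes = (
    "Visited the home on 3/2. Maternal aunt Denise was present and has "
    "watched the children before. Neighbor Mr. Ortiz drives the oldest "
    "child to school. Paternal grandmother lives out of state."
)

extraction_prompt = (
    "List every person mentioned in these case notes, with their apparent "
    "relationship to the child and the sentence that mentions them. "
    "Do not add anyone who is not explicitly named or described.\n\n" + case_notes
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": extraction_prompt}],
)

print(reply.choices[0].message.content)
# The worker, not the tool, decides who is actually appropriate to contact.
```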

ChatGPT and AI: Optimistically Proceed with Caution

While AI tools can bring numerous benefits to the realm of human services and social work, they need to be used wisely, with a focus on data privacy, responsible decision-making, and trust in their outputs.

A broadly available tool like ChatGPT may seem like a quick and easy way to explore AI, but it’s important to remember that vendors with decades of industry experience (like Northwoods!) have also built their own tools through a social services lens. These tools are designed with social workers in mind and have learned from child welfare policy, agency audits, Child & Family Services Reviews, and more to support evidence-based best practices. These vendors have also built in advanced privacy features to safeguard sensitive client information, such as encrypting data and storing it in a secure database.

No matter which tool you leverage, adhering to the best practices discussed in this blog post will help ensure you use it in a way that complements your professional capabilities rather than compromising them.

Want to learn more about the implications of artificial intelligence and ChatGPT for human services and social work? Here are some industry resources we recommend:

Lead Product Managers Mark Ruf and Lindsey Goodman, Director of Product Management Lauren Hirka, Industry Advocate Brittany Traylor, Chief Marketing Officer Brian LaMee, and Customer Success Managers Lindsay Drerup and John Irvin Hauser contributed to this post.
