Posted Thursday, October 31, 2019 by Lauren Hirka

Don't Get Spooked by Artificial Intelligence for Child Welfare

Can artificial intelligence (AI) really help my caseworkers? Or is it all just a bunch of hocus pocus?

This is one of the most common questions we hear when introducing child welfare agencies to the concept of case discovery, which leverages AI. (Okay, maybe not the hocus pocus part … but come on, it’s Halloween!)

We interact with AI almost daily—think Siri, Alexa, or even your Roomba. It continues to gain traction in government, and state chief information officers see it as the emerging technology with the most potential. Yet the concept still seems scary in the context of child welfare.

We’re here to tell you it doesn’t have to be. Before you run amok (Amok, amok, amok!), just remember a couple of things.

AI can make use of the information and data you’re already collecting.

We often hear from workers who are frustrated when they’re given technology that’s just another tool for collecting and reporting data. AI, on the other hand, can make that data usable and digestible in any place, at any time.

Here’s how we make it work: case discovery leverages a form of AI called machine learning to surface and analyze data that otherwise might have gone unnoticed (also known as “dark data”). It puts this critical information—analyzed through a child welfare lens—right at workers’ fingertips, with no need to re-collect or dig for data. It presents a complete picture of the child or family’s past and present to safeguard their future.

This means workers have more time to focus on clinical interactions with children and families, and they’re empowered with information to make more confident decisions.
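To make the idea of “surfacing dark data” a little more concrete, here is a minimal, purely hypothetical sketch in Python using scikit-learn. It shows the general machine-learning pattern of ranking unread case notes so the ones most likely to matter rise to the top. The notes, labels, and model here are invented for illustration and are not Traverse’s actual approach or data.

```python
# Illustrative only: a toy text model that flags case notes likely to mention
# a safety concern, loosely mimicking how machine learning can surface
# overlooked ("dark") data. Hypothetical data; not Traverse's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical, hand-labeled training notes (1 = flag for review, 0 = routine).
train_notes = [
    "caregiver reported missed medical appointments and unsafe housing",
    "home visit completed, family engaged, no concerns noted",
    "teacher observed unexplained bruising and withdrawn behavior",
    "routine check-in, services in place, child attending school",
]
train_labels = [1, 0, 1, 0]

# Turn free-text notes into numeric features, then fit a simple classifier.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
model = LogisticRegression()
model.fit(vectorizer.fit_transform(train_notes), train_labels)

# New, unread notes: rank them so likely concerns appear first for a worker.
new_notes = [
    "neighbor called about children left alone overnight",
    "family attended scheduled appointment, positive progress",
]
scores = model.predict_proba(vectorizer.transform(new_notes))[:, 1]
for note, score in sorted(zip(new_notes, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {note}")
```

The point of the sketch is the workflow, not the model: the data already exists in the notes; the software only reorders and highlights it so a human can review it sooner.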

AI will never replace human decision-making.

We hear a few similar questions and concerns from agencies: What if case discovery becomes a crutch for my caseworkers? What if it starts making decisions for my caseworkers?

In a recent post that debunks five myths about AI, Gartner notes: “Some forms of AI might give the impression of being clever, but it would be unrealistic to think that current AI is similar or equivalent to human intelligence.”  

We agree, and we truly believe that no matter how far we advance our work with data and technology, machines alone can’t meet the entire need. No machine can ever replace a human’s ability to understand the complexities of each child welfare case.

Technology can empower workers to discover elements of a case that might not otherwise have been found, but only a human can know what’s best for each child and family. More informed decisions are better decisions, which is why we follow one simple rule: trust, but verify.

Caseworkers must be able to apply their own training, observation, and critical thinking skills to understand how the information that technology surfaces applies to each unique case.

Don’t get spooked by the idea of artificial intelligence. Instead, get excited about the possibilities! Think about how your agency can leverage AI to make more informed, confident decisions. Of course, if you still have concerns or questions, we’re here to help.

Lauren Hirka, product manager, sets the long-term vision and strategy for Traverse, our content collection and case discovery solution, including the product roadmap, messaging, and communication with internal and external stakeholders. Lauren has spent hundreds of hours with child welfare professionals to research and develop Traverse from its inception. 
