Artificial intelligence (AI) is rapidly transforming various sectors, and its impact is now being felt within the criminal justice system. Ryan Cameron, a human services professional and academic with extensive corrections experience, highlighted this in a recent article he wrote for the American Correctional Association’s Corrections Today magazine, arguing that AI has the potential to address some of the industry’s biggest challenges while also raising serious ethical concerns.
This blog post takes a deep dive into Cameron’s arguments in his recent article “Artificial Intelligence in Corrections: Balancing Security and Ethics” (Corrections Today, Fall 2024) and provides Fusion Health’s perspective on how to responsibly leverage AI in correctional settings.
Point: AI’s Potential Benefits in Correctional Facilities
Cameron argues that AI can significantly benefit correctional facilities in several ways, starting with security. AI-powered surveillance systems can analyze vast amounts of data from video feeds, inmate communications, and other sources in real time to identify potential security threats like planned escapes, gang activity, or contraband smuggling. This proactive approach could prevent incidents before they occur, enhancing safety for staff and inmates alike.
Then there’s increased efficiency. AI can streamline jail, prison, or detention center operations by automating tasks like scheduling, inmate medication management, and even preliminary risk assessments. This can help alleviate the burden on officers, allowing them to focus on direct supervision, inmate interaction, and maintaining safety and security. This is especially valuable in the current landscape of correctional facilities, which are often plagued by staffing shortages.
AI could even help reduce recidivism. It can be leveraged to examine inmate data and identify individuals who might benefit from specific correctional healthcare programs or interventions, including those struggling with substance abuse, behavioral health conditions, and other chronic illnesses that require ongoing management and treatment. This more personalized approach could lead to more effective rehabilitation and lower recidivism rates.
Counterpoint: AI’s Potential Pitfalls, Ethical and Otherwise
However, Cameron also cautions against the potential downsides of AI in corrections, emphasizing the need for responsible implementation. For example, you likely know that AI algorithms are “trained” on data. If the data an AI is trained on unknowingly reflects existing biases in the criminal justice system, the AI system, which doesn’t know any better, will perpetuate those biases, potentially leading to unfair and disproportionate treatment of certain groups.
Here’s a real-world example. Some risk assessment tools used in corrections leverage historical crime data to predict the likelihood of an individual reoffending. However, if the data used to train these tools reflects existing biases in policing and arrests (such as possible over-policing of certain neighborhoods or racial profiling), the tool may predict a higher risk of reoffending for individuals from those groups—even if they aren’t actually more likely to commit crimes. This can lead to harsher sentences or stricter bail conditions for these individuals, perpetuating the existing biases in the system.
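The mechanism behind this example can be made concrete with a toy calculation. The sketch below is purely illustrative (the neighborhood names, rates, and policing multipliers are invented for this post, not drawn from any real tool or dataset); it shows how a naive risk score built only on arrest records can rate two groups differently even when their underlying behavior is identical:

```python
# Hypothetical illustration: a naive "risk score" trained only on arrest
# records inherits bias from uneven policing, not from behavior.

# Two neighborhoods with the SAME true reoffense rate (20%).
true_reoffense_rate = {"neighborhood_a": 0.20, "neighborhood_b": 0.20}

# But neighborhood A is patrolled twice as heavily, so the same
# behavior produces twice as many recorded arrests.
policing_intensity = {"neighborhood_a": 2.0, "neighborhood_b": 1.0}

def recorded_arrest_rate(neighborhood):
    """Arrests in the data = true behavior x how heavily it is policed."""
    return true_reoffense_rate[neighborhood] * policing_intensity[neighborhood]

def naive_risk_score(neighborhood):
    """A tool trained only on arrest records mistakes arrest
    frequency for reoffense likelihood."""
    return recorded_arrest_rate(neighborhood)

score_a = naive_risk_score("neighborhood_a")  # 0.40
score_b = naive_risk_score("neighborhood_b")  # 0.20

# The tool rates neighborhood A as twice as risky even though the
# underlying reoffense rates are identical: the bias comes from
# the data collection, not the people.
print(score_a, score_b)
```

The point of the sketch is that nothing in the scoring function is malicious; the disparity is baked into the training data before the model ever runs.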
This problem is compounded because the decision-making processes of complex AI algorithms can be opaque and hard to understand, making it difficult for users to see why the system made a specific decision or where a bias crept in. This lack of transparency can hinder accountability and make it challenging to address errors or biases.
Furthermore, Cameron argues that AI-powered surveillance systems raise serious privacy concerns for both inmates and staff. Inmates may feel like they are under even more constant surveillance, with their activities tracked and analyzed, leading to a sense of dehumanization. This feeling could extend to correctional staff as well: if workers feel they are being constantly monitored, it could make them uneasy and suspicious, potentially impacting their ability to build positive relationships with inmates.
Fusion’s Take on AI in Corrections: Proceed with Caution
Our take? Fusion Health largely agrees with Cameron’s assessment of AI’s potential in corrections and believes it offers exciting possibilities for enhancing safety, efficiency, and rehabilitation efforts. However, we believe it’s crucial to proactively address potential ethical concerns to ensure AI is used responsibly and equitably.
Here are some key considerations from our team:
- Data used to train AI algorithms must be accurate, representative, and free from bias. Corrections teams should regularly monitor and audit AI systems to identify and rectify any discriminatory outcomes.
- AI should be seen as a tool to assist correctional staff, not replace them. Human oversight is crucial to ensure that AI-driven decisions are fair and aligned with ethical guidelines.
- Correctional facilities should aim for transparency in their use of AI. Inmates and staff should have a clear understanding of how AI systems are being used and how those systems impact their lives.
- Strict protocols should be in place to protect the privacy of inmates and staff. Data collection and usage should be limited to legitimate security and operational purposes, with clear guidelines on data retention and access.
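To illustrate what the regular monitoring and auditing we recommend might look like in practice, here is a minimal, hypothetical sketch of a disparate-impact style check. The function names, threshold, tolerance ratio, and scores are all invented for this example, not taken from any real product or standard:

```python
# Hypothetical audit sketch: periodically compare AI risk-score outcomes
# across groups to flag potential disparate impact for human review.

def high_risk_rate(scores, threshold=0.5):
    """Fraction of individuals in a group flagged as high risk."""
    flagged = [s for s in scores if s >= threshold]
    return len(flagged) / len(scores)

def audit_disparity(scores_by_group, max_ratio=1.25):
    """Return each group's high-risk rate and whether the gap between
    the most- and least-flagged groups exceeds the allowed ratio."""
    rates = {g: high_risk_rate(s) for g, s in scores_by_group.items()}
    worst, best = max(rates.values()), min(rates.values())
    needs_review = (worst / best) > max_ratio if best > 0 else True
    return rates, needs_review

# Illustrative scores from a hypothetical risk-assessment tool.
scores = {
    "group_a": [0.7, 0.6, 0.4, 0.8, 0.55],
    "group_b": [0.3, 0.45, 0.5, 0.2, 0.35],
}
rates, needs_review = audit_disparity(scores)
print(rates, needs_review)  # group_a flagged far more often -> review
```

A failing check like this should trigger human review of the tool and its training data, not an automatic correction: the audit surfaces the disparity, and people decide what to do about it.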
Citation: Cameron, R. Artificial Intelligence in Corrections: Balancing Security and Ethics. Corrections Today, Fall 2024.