
Pretrial Risk Assessments (SB1618): Oklahoma, Just Say “NO”

March 3, 2026 | Posted by admin in Blog, News


Here we go again: another state is falling for the false promise of risk assessments. This time it is Oklahoma. The first question you have to ask yourself is why? Think about it: if progressive California, of all places, voted down risk assessments, why in the world would any other state, especially Oklahoma, even consider them? There is more than enough research out there that not only questions their effectiveness but, more importantly, points out their failings and their propensity to be discriminatory.

Pretrial Risk Assessments: Tools of Discrimination Masquerading as Science

In recent years, pretrial risk assessments have proliferated across U.S. courtrooms as a purported fix for what bail reform proponents believe to be inequities in the criminal justice system. These algorithmic tools—such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), the Public Safety Assessment (PSA), and various state-specific instruments like Michigan’s Praxis—analyze factors including criminal history, age, employment, education, and neighborhood characteristics to generate a “risk score” for failure to appear in court or new criminal activity. In other words, they are trying to predict the future. Proponents claim they replace subjective judicial discretion with objective, data-driven decisions, promoting fairness and reducing unnecessary pretrial detention.

Yet a growing body of evidence from civil rights organizations, academic researchers, and policy analyses demonstrates the opposite: pretrial risk assessments are both discriminatory and ineffective. They embed and amplify systemic biases while delivering predictions too unreliable to justify their use in deciding who spends weeks or months in jail before trial or who is released back into our community.


Built on Biased Data, They Reproduce Racial and Socioeconomic Disparities

The core problem begins with the inputs. These tools draw heavily from historical arrest records, prior convictions, and socioeconomic proxies—factors that reflect what some believe are decades of discriminatory policing, over-prosecution of minor offenses in low-income and minority communities, and unequal access to resources rather than any innate “risk” of an individual defendant.

Feeding this tainted data into algorithms simply automates and legitimizes the bias. As the University of Michigan’s Science, Technology, and Public Policy (STPP) program concluded in its 2023 policy brief: “Pretrial risk assessment tools replicate the racial and socioeconomic disparities that bail reform seeks to address.”

A landmark 2016 ProPublica investigation of COMPAS in Broward County, Florida, illustrated this vividly. Analyzing over 7,000 cases, researchers found Black defendants were nearly twice as likely as white defendants to be falsely flagged as high-risk for reoffending (44.9% false positive rate for Black defendants versus 23.5% for white defendants). Conversely, white defendants who did reoffend were more often misclassified as low-risk. Even though overall accuracy rates were similar across groups, the errors were racially skewed—over-predicting danger for Black people and under-predicting it for white people.

Civil rights groups have long warned of this. In 2018, more than 100 organizations—including the ACLU, Color of Change, and the Leadership Conference on Civil and Human Rights—issued a joint statement urging jurisdictions to reject these tools. They argued that the instruments “threaten to further intensify unwarranted discrepancies in the justice system and to provide a misleading and undeserved imprimatur of impartiality.”

Socioeconomic factors compound the issue. Many tools penalize unemployment, unstable housing, or residence in high-crime (often low-income) zip codes—variables that serve as stand-ins for poverty and race. In Michigan, for example, the Praxis tool explicitly scores unemployed individuals and non-caregivers higher risk, ignoring how structural barriers like job discrimination or childcare responsibilities drive these outcomes. The result: defendants of color and from poor communities receive inflated risk scores, leading to higher rates of detention or onerous conditions, even when their actual pretrial success rates are comparable.

Ineffective Predictions Masked by False Precision

Beyond bias, the tools simply do not work well at what they claim to do: accurately forecasting pretrial behavior.

Predictive validity is typically measured by the Area Under the Curve (AUC) statistic, where 0.5 equals random guessing and 1.0 is perfect prediction. Even the best pretrial tools hover around 0.60–0.70—meaning that roughly 30–40% of the time they rank a defendant who succeeds pretrial as riskier than one who fails, only a modest improvement over chance. A 2019 MIT Media Lab analysis highlighted “serious technical flaws” undermining accuracy and validity: tools use overly broad definitions of “flight risk” or “danger,” rely on flawed historical records that conflate arrests with actual criminality, and fail to distinguish individual risk levels meaningfully, especially for rare events like pretrial violence.
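To make the AUC figure concrete, here is a minimal sketch of how that statistic is computed. The risk scores and outcomes below are invented for illustration; they do not come from any real assessment tool. AUC is simply the probability that a randomly chosen defendant who "failed" pretrial was scored higher than a randomly chosen defendant who succeeded.

```python
def auc(scores, outcomes):
    """Pairwise AUC: fraction of (failure, success) pairs the tool
    ranks correctly. 0.5 is a coin flip; 1.0 is perfect ranking."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]  # pretrial failures
    neg = [s for s, y in zip(scores, outcomes) if y == 0]  # pretrial successes
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores (1-10) and outcomes (1 = failed to appear / rearrested)
scores = [3, 7, 5, 4, 2, 6, 5, 3]
outcomes = [0, 1, 0, 1, 0, 0, 1, 1]
print(round(auc(scores, outcomes), 3))  # 0.625, in the 0.60-0.70 band cited above
```

Note that an AUC of 0.625 means the tool mis-ranks a success above a failure in more than a third of all comparisons, which is the "modest improvement over chance" the research describes.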

Base rates expose the overreach. Most defendants succeed pretrial: the vast majority appear in court and commit no new crimes. Yet tools dramatically overestimate risks. In one analysis of the PSA, 92% of defendants flagged as high risk for violence were never arrested for a violent offense pretrial. Those in the highest risk category still had only an 8% chance of violent rearrest within six months. Vague categorical labels—“high risk,” “moderate risk”—hide this uncertainty, leading judges to overestimate prevalence and err on the side of detention.
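The base-rate arithmetic above can be sketched in a few lines. The 8% violent-rearrest rate is the figure cited for the PSA's top risk category; the cohort size of 1,000 is a hypothetical round number chosen for illustration.

```python
# Back-of-the-envelope illustration of the base-rate problem:
# even in the "highest risk" category, roughly 8% were rearrested
# for a violent offense within six months.
flagged_high_risk = 1000        # hypothetical cohort of "high risk" defendants
violent_rearrest_rate = 0.08    # rate cited for the top risk category

rearrested = int(flagged_high_risk * violent_rearrest_rate)
not_rearrested = flagged_high_risk - rearrested

print(rearrested)      # 80 people the label "predicted"
print(not_rearrested)  # 920 people flagged high risk with no violent rearrest
```

In other words, for every person the "high risk" label correctly flags, more than eleven others carry the same label and are never rearrested for violence, yet the categorical label treats all 1,000 alike.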

Failure-to-appear predictions fare no better. Tools ignore the many reasons why people miss court. The Pretrial Justice Institute summarizes the failure bluntly: risk assessments “label people as ‘risky’ even when their odds of success are high” and “fail to accurately predict whether someone will flee prosecution or commit a violent crime.”

Implementation studies reinforce ineffectiveness. When deployed, the tools show limited or counterproductive results. A multi-site quasi-experimental study found risk assessments associated with slightly higher rates of non-violent rearrests compared to standard practices. Different tools rarely agree on classifications; one 2006 comparison of five instruments found only 3% of defendants rated “high risk” across all of them.

A Veneer of Objectivity That Hinders Real Reform

By dressing biased guesses in scientific language, these tools provide false legitimacy to a system that desperately needs accuracy. The Pretrial Justice Institute’s position is clear: “RAIs are constructed from biased data, so the RAIs perpetuate racism. RAIs are not able to accurately predict whether someone will flee or commit a violent crime.” The University of Michigan memo concurs: “Risk assessment tools should play no role in pretrial administration.”

Pretrial detention already devastates lives—lost jobs, housing instability, family separation, and increased likelihood of conviction—disproportionately harming communities of color. Entrenching flawed algorithms only compounds the harm while delaying genuine solutions.

Criminal justice reform demands more than algorithmic window dressing. It requires accountability and support systems that enable court appearance. Until then, pretrial risk assessments remain what critics have long called them: discriminatory, ineffective, and a dangerous distraction from the work that actually makes communities safer and the system fairer.

Tags: aia, aia surety, bail agent's association, bail bond, bail bond agent, Biased Data, eric granof, Failure to Appear, Free Release, Oklahoma, pretrial release, Pretrial Risk Assessments, pretrial truth, Risk Assessments
