The things people don’t understand about AI are minuscule compared to the things we don’t understand about ourselves when we use AI. Everyone in legal should spend some time this weekend with David Colarusso‘s lesson on “Algos, Bias, Due Process, & You” over at the Suffolk LIT Lab blog. (Link in comments). H/T to Sam Harden for bringing it to my attention. And further H/T to @cursedpingu.bsky.social on Bluesky for the Reddit screenshot.
John is absolutely right. Most debates about AI in law focus on whether the models are good enough. Far fewer ask whether we are disciplined enough when we use them.

David Colarusso’s lesson is not really about broken algorithms. It’s about comfortable overreliance. The unsettling part isn’t that the system makes mistakes. It’s that people stop checking after the system has been right for a while. That’s where real legal risk begins. Not in hallucination. In quiet, time-pressured trust.

And the fairness simulations surface something even deeper. Many “bias” disputes are not about bad actors. They’re about competing definitions of what good means. Blackstone gave us a ratio: better that ten guilty go free than one innocent suffer. Prediction tools force us to operationalize one. AI doesn’t replace values. It exposes them. And that may be the most uncomfortable feature of all.
Check 👏 Your 👏 Cites! 👏
https://suffolklitlab.org/algos-bias-due-process-you/