The American View: AI Surveillance in Schools – Safety Net or Privacy Nightmare?


The Dallas Morning News ran an interesting article on 22nd March titled “Schools use AI to monitor kids, hoping to prevent violence. Report finds security risks.” The piece was a collaboration between Claire Bryan of the Seattle Times and Sharon Lurye of the Associated Press. The authors explored how school districts in Washington State, North Carolina, and Oklahoma have implemented machine learning solutions to monitor all of their students’ activity on their school-issued equipment and across their networks, detecting keywords that might indicate a student is suicidal, homicidal, bullied, mentally or emotionally troubled, or otherwise in need of intervention.
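
To make the mechanics concrete: at their simplest, these tools boil down to matching troubling phrases in student text, with machine-learning classifiers layered on top. The Python sketch below is purely illustrative – the keyword list, category labels, and function are hypothetical inventions for this column, not any vendor’s actual product – but it shows both the basic technique and why it misfires.

# A minimal, hypothetical sketch of keyword-based risk flagging.
# Real products add ML classifiers on top, but the core idea --
# matching troubling phrases in student text -- is the same.

RISK_KEYWORDS = {
    "self_harm": ["kill myself", "end it all"],
    "violence": ["shoot up the", "bring a gun"],
    "bullying": ["everyone hates you"],
}

def flag_message(text: str) -> list[str]:
    """Return every risk category whose keywords appear in the text."""
    lowered = text.lower()
    return [
        category
        for category, phrases in RISK_KEYWORDS.items()
        if any(phrase in lowered for phrase in phrases)
    ]

# The false-positive problem in miniature: a literature essay trips
# the same wire as a genuine cry for help.
print(flag_message("In Act III, Hamlet wonders whether to end it all."))
# -> ['self_harm']

A keyword match carries no context, which is precisely why the school districts in the article wrestled with false positives and misdirected interventions.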


It was a frank and worrisome look into the competing values and priorities of school administrators, parents, students, privacy advocates, and first responders, as well as the difficulties of managing a dangerously inexact spying program. Examples included botched intervention attempts that actively made things worse, such as a school outing a closeted LGBTQ student to their homophobic parents. Several sources quoted in the article complained of false positives, leaks of sensitive information, and misprioritized risk indicators: anecdotal evidence suggesting that these tech solutions deliver too little value to justify both their price and the backlash created by their use.


Bryan and Lurye’s article is a good summary of the irreconcilable motivations of the proponents and opponents of electronic surveillance in school information systems. On the one hand, lawyers are terrified of being sued into oblivion for being caught violating students’ privacy, leaking their sensitive information, and destroying their trust in authority. On the other hand, lawyers are equally terrified of being sued into oblivion for having the means to detect and prevent a terrible event — like a school shooting or a suicide — and getting caught not using it, thereby tacitly becoming an accessory to the event. It’s a no-win situation; no matter what position a school district takes on the no-surveillance-ever to surveil-everything-always spectrum, it can always count on getting sued for doing it “wrong.”

In American law, the scales held by “Lady Justice” stand for “damned if you do” and “damned if you don’t.”

This probably sounds painfully familiar to you if you’re a cybersecurity professional. We’ve been having these same arguments about when, why, and how to surveil our users for threat detection and intervention purposes since we first started networking computers in the 1990s. No one has ever found a “best practice” or a defensible standard that satisfies our community. It seems like every expert has a passionate argument for or against organisational surveillance and, gosh darn it, most of those arguments are compelling.


I’d hazard a guess that many of your organisation’s supervisors and senior leaders are aware of the problem as well. The longer you’ve worked in the corporate world, the likelier it is that you’ve encountered issues with internal surveillance. You might have been dragged into a contentious disciplinary or termination action where the admissibility of a user’s statements was argued to death by competing attorneys. Or you might have been tasked to clean up a “spill” of highly sensitive information that was gathered during an investigation. Or maybe you got stuck with the nearly impossible quest of hunting down and eliminating “shadow channels” on unapproved third-party sites that your users fled to in order to evade company surveillance.


One of my least favourite memories of running a military IT department was having to argue with senior leaders that no, the government really is reading everything you type, and it really does have legitimate, unlimited access to everything you’ve ever created on government information systems. Even after presenting the interested parties with exhaustingly detailed analysis and findings about United States v. Long (2006) and United States v. Larson (2008), supposedly rational officers still insisted that it wasn’t cricket to read a soldier’s self-incriminating emails and chats … no matter what the governing regulations said. [1]

“I CAN’T BE HELD ACCOUNTABLE; I’M THE MAIN CHARACTER!!” – Danged near every colonel I’ve ever met.

For most workers, the unpleasant truth of omnipresent monitoring is infuriating. No matter how many times you explain the law (or policy, or whatever), many workers just won’t accept it. I made an enemy for life when my wing commander ordered me to seize and search a unit PC for evidence of a memorandum that a dirtbag criminal claimed the vice wing commander had written – a memo that would have let said criminal off scot-free. It didn’t matter that we in IT were confident the “exonerating memo” was a forgery; we were directed to investigate and report our findings as ordered. The affected colonel became an implacable enemy of both me personally and the IT squadron in general for the rest of his time in uniform. Emotion usually trumps regulations when the chips are down.


No wonder, then, that organisations are hesitant to use the power and access they already have to actively search for indicators of potentially disruptive or dangerous behaviour. So long as your outfit never experiences a preventable major felony – a murder, a rape, or the like – doing nothing can feel like the safer of two bad options. Better to close your eyes and ears to the warning signs and hope for the best than to risk making an embarrassing mistake. That’s pretty normal for corpo folk; risk aversion is a natural defensive response to chaotic environments like the modern corporation. Moreover, the higher you rise in power and authority, the more difficult it becomes to understand what’s going on beneath you, even as your responsibility for what happens on your watch exceeds your ability to influence your domain.


Personally, I lean the other way. I’d rather accept the risk of making a mistake (and the cost of making it right) if my organisation’s surveillance efforts can save a life or interdict a trauma. Obviously, I’m biased; everyone is. I believe that a human life is more precious than a bank account balance. That position seems ethically grounded to me. Still, I realize that everyone must decide for themselves how to reconcile their values with their organisation’s policies and priorities.  

You don’t need to be a philosophy major to interrogate your personal and professional values. That said, you must understand your values and decide which of them cannot be compromised.

So, yeah. I empathize with the parents, students, and school staff interviewed in Bryan and Lurye’s article. I’d argue that many of the examples cited in their piece could – should, really – have been handled better by thorough planning, consistent execution, redundant accountability controls, and clear instructions. This is what a mature and responsible security department would insist on before initiating an internal surveillance program.


Still, Bryan and Lurye were talking about school systems … consistently under-resourced organisations where the technology barely works on most days. It’s going to be darned challenging for a school district that can only afford the bare minimum of IT administrators to run something as complicated as a formal internal surveillance program. Hell, I’ve yet to meet a dedicated cybersecurity employee for a school district; usually that’s an additional duty thrust on IT support, if it’s performed at all.


Businesses and government agencies, on the other hand, don’t have that excuse. Most outfits have the staff and the resources they need to run a comprehensive and accountable surveillance program. What they usually lack, I believe, is the moral courage to accept the risk and do what’s necessary to protect their people. Their fear of exposure and financial loss carries more emotional weight than the possibility of a violent crime occurring on their watch sometime in the undefined future. 


[1] If you’ve ever clicked through a boilerplate login warning that declares “Use of this system constitutes consent to monitoring,” United States v. Larson is the case that settled the “nuh uh!” argument once and for all … at least in the U.S. military. Your mileage may vary. Discuss your options with your company lawyers … once they stop screaming.
