If we’ve learned anything from sci-fi flicks, it’s that AI-driven law enforcement comes in two flavours: RoboCop and ED-209 (the former being significantly more attractive than the latter). Certain law-enforcement agencies have already begun using futuristic tech like biometrics and facial recognition, but the legislation surrounding its use is still vague. That’s why Europe is looking to ban it.
Algorithmic law-enforcement needs some work
The European Parliament has moved for lawmakers to ban the use of algorithm-driven surveillance tools utilised in predictive policing. According to Gizmodo, MEPs have voted in favour of laws against the use of so-called “automated analysis and/or recognition” technology that law-enforcement authorities rely on in investigations and rulings.
This doesn’t mean anything firm just yet. Parliament can’t actually draft and enforce new legislation; it can only vote on and pass proposals. It’s up to the European Commission (which recently demanded that smartphone manufacturers provide several more years of software support for their devices) to actually develop new laws.
That said, with the EC’s track record, the European Parliament might just get its way. The basis for Parliament’s stance on automated analysis systems is that the legislation surrounding them just isn’t ready yet, and that poses major risks to citizens’ personal privacy. Should the law come to pass, law enforcement would be prohibited from utilising biometric surveillance technology (in the form of facial recognition software, voice recognition and the like) and, additionally, private companies would be banned from using biometric databases within the EU’s borders.
It’s a bold stance that opens up an important debate. As effective as algorithmic policing can be, without the proper scaffolding to regulate it, it can pose a serious (and dystopian) threat to personal privacy. And, as we’ve all come to accept, privacy means everything these days.