⚡️Trendbreak #22⚡️

Hello 🇮🇪

This week, we're zeroing in on a headline maker: the European Union's bold effort to craft a regulatory framework around AI 🇪🇺⚖️🤖. It's uncharted territory, and the game plan is to stratify applications by severity of risk. "Unacceptable risk" applications (like social scoring systems and mass biometric surveillance) would be banned outright, while medium- and high-risk ones would need to pass internal checks and possibly third-party audits - think healthcare, for instance.
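To make the tiering concrete, here's a minimal sketch of that risk-to-obligation mapping. It's purely illustrative: the tier names and obligations below paraphrase the proposal's structure, not the legal text.

```python
# Toy model of the AI Act's tiered approach: classify an application's
# risk level, then look up what the proposal would require of it.
# (Tiers and obligations are paraphrased for illustration only.)

OBLIGATIONS = {
    "unacceptable": "banned outright (e.g. social scoring, mass biometric surveillance)",
    "high": "internal checks + possible third-party audit (e.g. healthcare)",
    "medium": "internal checks",
    "low": "no new obligations",
}

def obligations_for(tier: str) -> str:
    # Fail loudly rather than silently green-lighting an unclassified system.
    return OBLIGATIONS.get(tier, "unclassified: assess before deployment")

print(obligations_for("high"))
# -> internal checks + possible third-party audit (e.g. healthcare)
```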
In the same vein as GDPR, the forthcoming AI Act (official version here) promises to send shockwaves well beyond European borders. Why? For starters, it'll affect almost 450 million people, a substantial user base for this sort of tech. And as the first regulation of its kind, it's bound to become the yardstick for those that follow.
For a comprehensive, American-centric take on this, give this article from the MIT Technology Review a read.

Moving on, we've got a thought-provoking piece from Quanta Magazine on a genuinely hard problem: devising robust tests to assess whether an AI actually understands anything. You've likely heard of the Turing Test; a more recent and elegantly designed alternative is the Winograd Schema Challenge.
The author's takeaway is that despite AI heavyweights like Watson and GPT-3 acing existing benchmarks, whether they truly understand remains an open question - which shows just how tricky this concept is to pin down.
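To see why the Winograd format is so elegant, here's a minimal sketch (our illustration, not code from the article) built around the canonical schema: swap a single word and the pronoun's referent flips with it, which is exactly the kind of resolution that surface statistics alone struggle with.

```python
# A Winograd schema: a sentence pair differing by one "special" word,
# which flips the noun the ambiguous pronoun refers to. Answering
# correctly is meant to require common sense, not pattern matching.

SCHEMA = {
    "template": "The city councilmen refused the demonstrators a permit "
                "because they {word} violence.",
    "question": "Who {word} violence?",
    "variants": {
        "feared": "the city councilmen",    # swap one word...
        "advocated": "the demonstrators",   # ...and the answer flips
    },
}

for word, answer in SCHEMA["variants"].items():
    print(SCHEMA["template"].format(word=word))
    print(f"  Q: {SCHEMA['question'].format(word=word)}  ->  A: {answer}\n")
```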

Code review has emerged as a major boon for software engineers over the past few decades 🧑‍💻👩‍💻. Alberto Bacchelli, an associate professor at the University of Zurich, and his team are breaking new ground in empirical software engineering, aiming to pin down the productivity gains it brings.
At the latest edition of the Strange Loop ∞ conference, he presented his findings on cognitive biases that can hamper review effectiveness, along with some ways to dodge them.

Wishing you a week brimming with new learnings! 🤓

By @Clément Chastagnol
Tags: #Trendbreak