PyData Amsterdam 2023

Ok, doomer
09-15, 13:00–13:30 (Europe/Amsterdam), Foo (main)

AI won't end the world, but it can make, and already is making, life miserable for plenty of folks. Instead of engaging with the AI overlords, let's explore a pragmatic set of design choices that all data scientists and ML devs can implement right now to reduce the risks of deploying AI systems in the real world.


Leave the AI doomers to grumble amongst themselves about x-risk and the singularity. Instead, let's focus in on how we can alleviate the real-world harms happening right now.

Too often, attempts to identify risks and respond to failure modes of ML and automated systems dive straight into the specifics of model, stack, and implementation. Or worse, they add further impenetrable layers of abstraction: the "more models, more problems" syndrome. While it's encouraging to see the ecosystem of explainability and MLOps tooling surging, as developers and pragmatists we should always prefer the simplest, cheapest tool in our toolkit that is fit for purpose.

This talk calls attention to a number of existing simple, cheap and effective levers for flagging and reducing risk that are often overlooked.

These are software design fundamentals, like timely and contextual feedback loops or graceful degradation, that are easily forgotten in the rush to market. These pragmatic tools and product design choices can immediately improve visibility and safety, and reduce reputational risk, for any team implementing AI.
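As a rough illustration of the kind of cheap lever meant here, consider graceful degradation around a model call. This sketch is not from the talk; the names `shaky_model`, `heuristic_fallback`, and the confidence threshold are hypothetical stand-ins for whatever a real system would use.

```python
# Illustrative sketch of "graceful degradation" for an ML prediction step:
# if the model errors out or is not confident enough, fall back to a cheap,
# auditable rule instead of failing silently or crashing.

def heuristic_fallback(features):
    # Simple, inspectable rule used when the model can't be trusted.
    return {"label": "needs_human_review", "source": "fallback"}

def predict_with_degradation(model, features, min_confidence=0.7):
    """Return a model prediction, degrading to the heuristic on
    failure or low confidence. `min_confidence` is an assumed knob."""
    try:
        label, confidence = model(features)
    except Exception:
        # Model unavailable or broken: degrade, don't crash.
        return heuristic_fallback(features)
    if confidence < min_confidence:
        # Model unsure: hand off rather than guess.
        return heuristic_fallback(features)
    return {"label": label, "source": "model", "confidence": confidence}

# Usage with a hypothetical model that is currently down:
def shaky_model(features):
    raise RuntimeError("model unavailable")

print(predict_with_degradation(shaky_model, {"x": 1}))
```

The point is product-level: the caller always gets a usable, clearly labelled answer, and the `source` field makes the degraded path visible in logs and to users.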

P.S. Better oversight and tooling for our current tech will, by definition, improve our chances of being alerted if an existentially risky intelligence did happen to emerge from the silicon ether one day. So it's a win-win, really. 🤷‍♀️


Prior Knowledge Expected

No previous knowledge expected

Laura is a Design Engineer and Prodigy Teams Product Lead at Explosion AI.

She is the founder of Debias AI (debias.ai) and the human behind Sweet Summer Child Score (summerchild.dev), Ethics Litmus Tests (ethical-litmus.site), fairXiv (fairxiv.org), and the Melbourne Fair ML reading group (groups.io/g/fair-ml). Laura is passionate about feminism, digital rights and designing for privacy. She speaks, writes and runs workshops at the intersection of design and technology.