Discussion about this post

Maximilian Press:

I get all of these arguments, but the issue for me is not really "Biology is complicated too!" but that model interpretability / introspectability can be an indispensable aid to validation. I can look at a linear regression model, and if its coefficients are wack compared to my science headcanon, that gives me important information about the usefulness of that model. This bears directly on your statement that "Understanding the data behind an AI model is comparable to understanding the theory behind a conventional model." Exactly, but with an AI model there is no step where I can go see whether it passes the smell test!
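To make the "smell test" concrete, here is a minimal sketch (my own illustration, not from the comment): fit a linear model on synthetic data where the covariate names and expected effect directions are hypothetical stand-ins, then check the fitted coefficient signs against those priors. That check is exactly the validation step a black-box model doesn't offer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical covariates: "dose" (expected positive effect) and
# "degradation time" (expected negative effect) on a measured response.
n = 200
X = rng.normal(size=(n, 2))
true_coefs = np.array([2.0, -1.5])
y = X @ true_coefs + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column.
coefs, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), y, rcond=None)

# The smell test: compare each fitted sign to the prior expectation.
expected_signs = np.array([+1.0, -1.0])  # prior: dose up, degradation down
for name, fitted, expected in zip(["dose", "degradation"],
                                  np.sign(coefs[:2]), expected_signs):
    status = "ok" if fitted == expected else "SUSPECT"
    print(f"{name}: {status}")
```

With a coefficient table in hand, a sign that contradicts domain knowledge is an immediate red flag; an uninterpretable model gives you no analogous hook to pull on.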

I wouldn't therefore argue that AI models have no use in biology (I have used them happily! And also less happily), but rather that they carry a significantly higher burden of proof for their usefulness and applicability.

My pitiful human focalization, which you rightfully denigrate, indeed makes a mockery of the glory of biology. But I can at least evaluate other commensurable mockeries.

With a low-interpretability model, I've made my life much worse! Now I have to understand _two_ irreducibly complex systems, one of which doesn't even have any relevance to my biological question.

That's not to say there isn't an impressive array of statistical tools for doing the evaluation I'm asking for. And I'd never claim that there is a viable alternative to AI in many applications.

Just: remember that at the end of the day a human still has to make sense of it all. Or nothing we are doing matters at all.
