Spot-on.
I spend a lot of time training people to review code properly, and the only real way to get good at it is by writing and reviewing a lot of code.
An LLM trains on a lot of code, but it does no review per se; unlike other ML systems, there are no negative and positive feedback mechanisms in place to improve quality.
Unfortunately, AI is now equated with LLM and diffusion models instead of machine learning in general.
This is a horribly written article about an exciting discovery.
Essentially, they’ve discovered that some humans don’t actually have the AnWj antigen, whereas it was previously assumed that all humans carried some configuration of it. They’ve also found a way to test for the missing antigen.