Is there some formal way of quantifying potential flaws, or risk, and ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or some form of risk assessment?
Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.
But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
“When a measure becomes a target, it ceases to be a good measure.” — Goodhart’s law
There are tools to detail the code coverage of your tests. I’ve worked with Istanbul in the past, and it’s helped to point out parts of the code that could use more attention.
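For a Node project, the usual entry point is Istanbul's nyc CLI; a minimal invocation (assuming Mocha as the test runner) looks roughly like this:

```
# Run the test suite under nyc (Istanbul's CLI) and report coverage
npx nyc --reporter=text --reporter=lcov mocha
```

The text reporter lists uncovered lines per file, which is where the "could use more attention" signal comes from.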
Mutation testing is useful. It basically tests how effective your tests are and points out conditions that aren’t being exercised. For Java: https://pitest.org
Edit: corrected to the more general name instead of a specific implementation.
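To make the "how effective your tests are" point concrete, here's a hypothetical JUnit 5 sketch (class name and numbers invented): the test executes every line and branch, so coverage reports 100%, yet a PIT-style mutant that weakens the boundary check still survives.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DiscountTest {

    // Code under test: orders of 100 or more get a 10% discount.
    static double priceAfterDiscount(double total) {
        return total >= 100.0 ? total * 0.9 : total;
    }

    @Test
    void coversBothBranchesButMissesTheBoundary() {
        // Both branches run, so line/branch coverage reads 100%...
        assertEquals(135.0, priceAfterDiscount(150.0), 1e-9);
        assertEquals(50.0, priceAfterDiscount(50.0), 1e-9);
        // ...but nothing asserts on total == 100.0, so a mutant that
        // changes ">=" to ">" passes every assertion. A mutation tool
        // like PIT reports that surviving mutant as a gap in the tests.
    }
}
```

With the Maven plugin, the report comes from something like `mvn org.pitest:pitest-maven:mutationCoverage`.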
Does something like this exist for Python?
So true lol. Mgmt just announced a directive at my work last week that code must have 95-100% coverage.
Meanwhile they hire contractors from India who write the dumbest, most useless tests possible. I’ve worked with many great Indian devs, but the contractors we use today all seem like a step down in quality. More work for me, I guess.