• CheeseNoodle@lemmy.world · 4 months ago

    IIRC it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of models.

      • CheeseNoodle@lemmy.world · 4 months ago

        This one’s from 2019: Link
        I was a bit off the mark: it’s not that the models they use aren’t black boxes, it’s just that they could have made them interpretable from the beginning and chose not to, likely due to liability.

    • Johanno@feddit.org · 4 months ago

      Well, in theory you can explain how the model comes to its conclusion. However, I’d guess that only 0.1% of “AI Engineers” are actually capable of that, and those probably cost 100k per month.
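
      A minimal sketch of one way that’s done in practice (post-hoc feature attribution; the dataset and model here are purely illustrative assumptions, not anything from this thread):

      ```python
      # Hypothetical example: check which inputs a fitted model actually relies
      # on, using permutation importance from scikit-learn.
      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.inspection import permutation_importance

      X, y = load_breast_cancer(return_X_y=True, as_frame=True)
      model = RandomForestClassifier(random_state=0).fit(X, y)

      # Shuffle each feature and measure how much the score drops;
      # a large drop means the model leans on that feature for its conclusion.
      result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
      top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
      for name, score in top:
          print(f"{name}: {score:.3f}")
      ```

      Attribution like this only says which inputs mattered, not why, which is part of why genuinely explaining a large model gets expensive.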

    • Tryptaminev@lemm.ee · 3 months ago

      It depends on the algorithms used. Now, the lazy approach is to just throw neural networks at everything and waste immense computational resources; of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.
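
      A minimal sketch of that point (illustrative data and model choice assumed here, nothing claimed about any specific system): a small decision tree trains cheaply and its decision logic can be read off directly.

      ```python
      # Hypothetical example: an interpretable model whose full decision
      # process can be printed as human-readable if/else rules.
      from sklearn.datasets import load_iris
      from sklearn.tree import DecisionTreeClassifier, export_text

      data = load_iris()
      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

      # Every prediction can be traced through these learned thresholds.
      print(export_text(tree, feature_names=list(data.feature_names)))
      ```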