I found this while browsing Reddit, and once you get past the initial “this is terrible” reaction, I think it can spark an interesting discussion about machine learning and how our own societal problems can end up creating bad habits and entrenching those issues in the systems we build.
So, what do you guys think about this?
The problem with AI is that we don’t “program” it directly. It learns on its own, absorbing whatever data you throw at it and naïvely interpreting it, much like a small child who repeats inappropriate comments based on what they have heard, because being respectful of other people requires an awareness of them that the child (or the model) doesn’t yet have.
Exactly. The problem is that with a small child you can actually teach it right from wrong, while with AI that’s much harder to do. The people who develop this kind of software (in this case Google) should give more thought to the problems it can create, since it basically parrots societal behaviors.
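To make the “it parrots whatever is in the data” point concrete, here is a minimal sketch using scikit-learn and entirely made-up toy data (the group names, sentences, and labels are hypothetical, not from any real dataset): a classifier trained on skewed examples ends up scoring an otherwise neutral sentence differently depending only on which group it mentions, even though nobody programmed that bias in explicitly.

```python
# Toy sketch with hypothetical data: a model trained on a skewed corpus
# reproduces the skew -- the bias comes from the data, not from the code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Imagine scraped comments where "groupB" mostly shows up in negative contexts.
texts = [
    "groupA person is friendly", "groupA person is helpful",
    "groupA person is rude",
    "groupB person is rude", "groupB person is hostile",
    "groupB person is friendly",
]
labels = [1, 1, 0, 0, 0, 1]  # 1 = positive sentiment, 0 = negative

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A neutral test sentence gets a different "positive" probability
# depending only on the group word it contains.
for sentence in ["groupA person is here", "groupB person is here"]:
    prob_positive = model.predict_proba([sentence])[0][1]
    print(sentence, "->", round(prob_positive, 2))
```

On this tiny toy corpus the gap is small, but it already points the same way as the skew in the training data, and scaling up to millions of real comments just makes the same mechanism stronger.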