aphar ([personal profile] aphar) wrote 2021-10-04 03:11 pm

Trusted AI

AI is very useful but people often don't trust it.

The scientific establishment thinks (or claims to think) that the problem is explainability (or interpretability): AI is often a "black box" that cannot explain its actions. IMO, this is "looking where the light is, not where the keys were dropped".

The political establishment claims that the problem is that AI decisions often contradict the official ideology. E.g., women often receive recommendations for "knitting" videos while men get "robotics" ones (even when the AI does not know the customer's sex, because the customer previously liked "cooking" or "engineering"). The "liberal" establishment calls that "gender discrimination".

Personally, I don't trust AI because it's a tool that is not working for me: I neither own nor control it. E.g., Google Assistant will recommend a product that promotes Google's revenue, not my well-being. Even more sinister, the AI can be updated remotely, so today it will take care of me and tomorrow it may try to sabotage me.

[personal profile] brevi 2021-10-04 09:05 pm (UTC)
There are successful AI companies whose whole raison d'ĂȘtre is to make decision-making auditable with a "paper trail".