Emotion AI uses biological signals such as vocal tone and facial expressions, data from wearable devices, text, and patterns of computer use to detect and predict how someone is feeling. It is already being applied in the workplace, including in hiring. Loss of privacy is just the beginning: workers are worried about biased AI and about the pressure to perform the ‘right’ expressions and body language for the algorithms.

  • FaceDeer@fedia.io · 4 months ago

    Is that not the first step toward providing aid? Would you rather the AI simply issue a prescription or something?

    Anyway, as I said, I’m not saying this is how it goes. I’m just presenting a view that’s non-dystopian, as was explicitly asked for. The AI could easily be operating under rules that would prevent it from telling anyone else of the trouble it had detected until you give it permission, if that would satisfy your privacy concerns.

    • DarkThoughts@fedia.io · 4 months ago

      I’d rather not have an “AI” invade my privacy in general.

      The AI could easily be operating under rules that would prevent it from telling anyone else of the trouble it had detected until you give it permission, if that would satisfy your privacy concerns.

      What? That’s not how those “AIs” work at all. lol

      • FaceDeer@fedia.io · 4 months ago

        I’m not talking about any specific currently existing AI; I’m talking about a hypothetical one. It is indeed possible to set up an AI so that it wouldn’t tell anyone else what’s going on. It’s just a computer program; it can be set up however one wants.
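
        For instance, here is a minimal sketch of that kind of rule, purely illustrative (every name in it, like ConsentGate, is made up and not any real product’s API): detections stay on the device, and nothing is disclosed until the user explicitly opts in.

        ```python
        from dataclasses import dataclass, field


        @dataclass
        class EmotionReading:
            """One hypothetical detection, e.g. from vocal tone or facial expression."""
            signal: str  # e.g. "vocal tone"
            label: str   # e.g. "distress"


        @dataclass
        class ConsentGate:
            """Holds detections locally; discloses nothing until the user opts in."""
            _pending: list = field(default_factory=list)
            _consented: bool = False

            def record(self, reading: EmotionReading) -> None:
                # The detection stays local: no network call, no report to anyone.
                self._pending.append(reading)

            def grant_consent(self) -> list:
                # Only an explicit user action releases the stored readings.
                self._consented = True
                released, self._pending = self._pending, []
                return released


        gate = ConsentGate()
        gate.record(EmotionReading("vocal tone", "distress"))
        # At this point nothing has left the device. Sharing happens only if
        # the user chooses to call grant_consent():
        shared = gate.grant_consent()
        print(f"User chose to share {len(shared)} reading(s).")
        ```

        Whether any vendor would actually ship such a rule is a separate question; the sketch only shows that the gating itself is trivial to implement.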