scarily… They don’t need to be this creepy, but even I’m a tad baffled by this.

Yesterday a few friends and I were at a pub quiz; no phones allowed, of course, so none were used.

It came down to a tie-break question between my team and another: “What is the runtime of The Lord of the Rings: The Fellowship of the Ring, according to IMDb?”

We answered and went about our day. Today a friend from my team messaged me: the top post on his “today feed” was an article published 23 hours ago…

Forgive the pointless red circle… I didn’t take the screenshot.

My friend isn’t a privacy-conscious person by any means, but he didn’t open IMDb or google anything to do with the franchise, and hasn’t for many months. I’m aware it’s most likely an incredible coincidence, but when stuff like this happens I can easily understand why many people are convinced everyone’s doom brick is listening to them…

  • Onihikage@beehaw.org · edited · 14 hours ago

    How can you catch the right fish, unless you’re routinely casting your fishing net?

    It’s a technique called Keyword Spotting (KWS). https://en.wikipedia.org/wiki/Keyword_spotting

    This uses a tiny speech-recognition model trained on very specific words or phrases which are (usually) distinct from general conversation. Because the model is so small, it needs very little computation to process the audio stream and decide whether the keyword has been spoken, even before optimization steps like quantization. Here’s a 2021 paper where a team of researchers optimized a KWS model down to just 251 µJ (0.00007 milliwatt-hours) per inference: https://arxiv.org/pdf/2111.04988

    The small size of the KWS model, required for the low power consumption, means it can’t be used on its own to listen in on conversations: it outright doesn’t understand anything other than what it’s been trained to identify. This is also why you usually can’t set the keyword to just anything, only to one of a limited set of words or phrases.

    This all means that if you’re ever given the option of a completely custom wake phrase, you can be reasonably sure that device is running full speech recognition on everything it hears. A smart TV or an Amazon Alexa, which are plugged into mains power, have a lot more freedom to listen as much as they want with as complex a model as they want. High-quality speech-to-text apps like FUTO Voice Input run locally on just about any modern smartphone, so something like a Roku TV can definitely do it.
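    To make the idea concrete, here’s a toy sketch of keyword spotting. The “model” is just a cosine-similarity match against a stored template (the template values, threshold, and function names are all made up for illustration); a real device would use a small neural network instead, but the shape is the same: slide a tiny fixed detector over the audio stream, and it can only ever say “keyword or not”, never transcribe what it heard.

```python
# Toy keyword-spotting (KWS) sketch: a tiny fixed detector scores short
# windows of a continuous stream and fires only when its one known
# pattern appears. Cosine similarity stands in for a small neural net.
import math

TEMPLATE = [0.1, 0.9, 0.8, 0.2]  # hypothetical acoustic fingerprint of the wake word
THRESHOLD = 0.95                 # similarity required to trigger

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def spot_keyword(stream, window=len(TEMPLATE)):
    """Slide over the stream one frame at a time; return window starts that trigger."""
    hits = []
    for i in range(len(stream) - window + 1):
        if cosine(stream[i:i + window], TEMPLATE) >= THRESHOLD:
            hits.append(i)
    return hits

# Background chatter (arbitrary frames), then the wake word at index 4:
audio = [0.5, 0.3, 0.6, 0.4, 0.1, 0.9, 0.8, 0.2, 0.5]
print(spot_keyword(audio))  # [4]
```

    Note that the detector carries no vocabulary beyond its one template, which is the point of the comment above: it physically cannot eavesdrop on conversation content.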

    • tetris11@lemmy.ml · edited · 12 hours ago

      I appreciate the links, but these are all about how to efficiently process an audio sample for a signal of choice.

      My question is: how often is audio sampled from the vicinity to allow such processing to happen?

      Given the near-immediate response of “Hey Google”, I would guess once or twice a second.
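      For a sense of scale, here’s a back-of-the-envelope sketch with figures typical of speech pipelines (16 kHz capture, a ~1 s analysis window re-scored every 20 ms). These numbers are assumptions for illustration, not measurements from any particular device, but they show why detection can feel immediate: the microphone is sampled continuously and the tiny model re-scores the window dozens of times per second, not once or twice.

```python
# Sampling-cadence sketch: audio arrives continuously at a fixed rate,
# and the detector re-scores a sliding window at a short hop interval.
# All figures are assumed typical values, not device measurements.
SAMPLE_RATE = 16_000   # samples per second, common for speech models
WINDOW_S = 1.0         # the detector looks at roughly the last 1 s of audio
HOP_S = 0.020          # and re-scores that window every 20 ms

window_samples = int(SAMPLE_RATE * WINDOW_S)
hop_samples = int(SAMPLE_RATE * HOP_S)
inferences_per_second = SAMPLE_RATE // hop_samples

print(window_samples, hop_samples, inferences_per_second)  # 16000 320 50
```

      At ~50 inferences per second, even the 251 µJ-per-inference figure cited above works out to only on the order of 10 mW, which is why always-on wake-word detection is feasible on battery power.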