• fearout@kbin.social · 1 year ago

    As was mentioned in another comment, it’s a statistical term: sigma is the standard deviation, and an n-sigma event is one that lies n standard deviations from the center of the bell curve. The higher the sigma, the less likely it is that an observed event was a fluke.

    For example, using the one-sided tail convention common in particle physics, a 1-sigma event has about a 16% chance of being a “coincidence”, and a 2-sigma event about 2.3%.

    In science, 3-sigma (a 0.135% chance) is usually the first publishable level of significance; it’s when something becomes significant enough to start a discussion.

    And 5-sigma is the most common threshold for claiming a discovery. A 5-sigma event has only a 0.0000287% chance of being a coincidence or some random happenstance, i.e. about one in 3.5 million.

    The Higgs boson discovery was announced after 5-sigma significance was reached. It means that if the particle didn’t actually exist, the chance of the experiments producing a signal at least that strong would be about 1 in 3.5 million.
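
    If you want to check these numbers yourself, here’s a minimal Python sketch using only the standard library (the one-sided tail probability of a standard normal is erfc(n/√2)/2):

    ```python
    from math import erfc, sqrt

    # One-sided tail probability of a standard normal distribution:
    # P(Z > n) = erfc(n / sqrt(2)) / 2
    for n in range(1, 6):
        p = erfc(n / sqrt(2)) / 2
        print(f"{n}-sigma: p = {p:.3g} (about 1 in {1 / p:,.0f})")
    ```

    Running it gives ~0.159 for 1-sigma down to ~2.87e-7 (about 1 in 3.5 million) for 5-sigma.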

      • fearout@kbin.social · 1 year ago

        Interesting, I hadn’t heard about that. Can you give an example of how it’s used in business? What is actually measured?

          • fearout@kbin.social · 1 year ago

            Thanks. So I guess it doesn’t really measure anything in that field. It looks more like a strategy guideline and a set of techniques.

            • enkers@sh.itjust.works · 1 year ago

              If you read the section on the etymology of the business term, it originally referred to a quality-control metric. Basically, it means your QC is so good that a defective unit gets shipped only a tiny fraction of the time.

              Processes that operate with “six sigma quality” over the short term are assumed to produce long-term defect levels below 3.4 defects per million opportunities (DPMO). The 3.4 DPMO figure is based on a “shift” of ±1.5 sigma explained by Mikel Harry.
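
              To see where 3.4 comes from: with the ±1.5-sigma shift, “six sigma” quality corresponds to the one-sided tail beyond 6 - 1.5 = 4.5 sigma. Here’s a small Python sketch (the dpmo helper is just for illustration):

              ```python
              from math import erfc, sqrt

              def dpmo(sigma_level, shift=1.5):
                  # Long-term defect rate: the process mean is assumed to drift
                  # by `shift` sigma, so defects lie beyond (sigma_level - shift)
                  # on one side of the distribution.
                  tail = erfc((sigma_level - shift) / sqrt(2)) / 2
                  return tail * 1_000_000

              print(round(dpmo(6), 1))  # -> 3.4 defects per million opportunities
              ```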