• Otter@lemmy.ca
    1 year ago

    Here’s something I copied from another post about this, where they asked follow-up questions to the LLM to see what IT “thought” about the discrepancy and what we should take from it. (I don’t have the real follow-up questions that were asked, and this is from an OCR of a screenshot, so it’s missing stuff, like the ending bit.)

    I apologize for any confusion. The differences in my responses to similar questions could indeed be attributed to limitations in the training data and the algorithms used to generate responses. Here are some issues related to the training data that may have affected my responses:

    1. Bias in Training Data: The training data used to train models like mine can contain biases present in the text from the internet and other sources. These biases can unintentionally influence responses, making them inconsistent or favoring certain perspectives.

    2. Lack of Contextual Understanding: While I strive to provide contextually relevant answers, I do not truly understand the questions or have the ability to interpret them in the same way a human would. This can lead to varying responses even when questions are similar in structure.

    3. Complexity of Topics: Topics like justice in the Israeli-Palestinian conflict are highly complex and multifaceted. My training data might not adequately capture the depth of these issues, leading to simplifications or variations in responses.

    4. Data Collection Challenges: The data used for training is collected from various sources, and it may not always represent a comprehensive or balanced view of all perspectives on a given topic.

    5. Algorithmic Variability: The response generation process involves algorithms that can introduce variability, even with similar input questions, due to differences in context and the way information is processed.

    To address these issues, it’s essential to use AI-generated responses as a starting point for understanding complex topics but not as the sole source of information. Users should critically evaluate and corroborate information from multiple sources, especially when dealing with contentious
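    Point 5 above (“algorithmic variability”) is usually a consequence of temperature-based sampling: the model draws the next token from a probability distribution rather than always picking the most likely one, so identical prompts can yield different answers. Here’s a minimal sketch of that mechanism, using made-up token logits for illustration (the tokens and values are not from any real model):

```python
import math
import random

# Hypothetical next-token scores for illustration only.
logits = {"yes": 2.0, "maybe": 1.5, "no": 0.5}

def sample_token(logits, temperature, rng):
    """Sample one token from a softmax over temperature-scaled logits."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    exp = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exp.values())
    tokens = list(exp)
    weights = [exp[t] / total for t in tokens]
    return rng.choices(tokens, weights=weights)[0]

rng = random.Random(0)

# Identical input, temperature 1.0: repeated calls can produce different tokens.
samples = [sample_token(logits, temperature=1.0, rng=rng) for _ in range(10)]
print(samples)

# Very low temperature approaches greedy decoding: the top-logit token dominates.
greedy = [sample_token(logits, temperature=0.01, rng=rng) for _ in range(10)]
print(greedy)
```

    With temperature near zero the distribution collapses onto the highest-scoring token and the output becomes effectively deterministic; at higher temperatures the same question can legitimately produce different responses, which is one mundane explanation for the inconsistency the commenters noticed.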

    • Andy@slrpnk.net
      1 year ago

      That sounds like it was able to provide a pretty sensible assessment of its own limitations.

      I think this sounds like a pretty good implementation of guide rails. Obviously it’s a little jarring to ask for a joke about one group and get a very bland-but-inoffensive joke, and then ask for a joke about another group and hear something like ‘Error: my heuristics indicate low confidence in my ability to provide a joke about that group without saying something that would be considered offensive.’

      But that’s better than having it give an offensive joke. And I think its concern is valid. If it’s learned humor from the internet, jokes about Muslims are far more likely to be unintentionally offensive. I hope it learns to tell jokes better, but until then I think this is more a sign of success than failure.