• 2 Posts
  • 98 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • It’s not so much that I stopped feeling nostalgic; it’s that I realized there weren’t as many great games as I thought that haven’t since gotten better successors or remakes. And on Nintendo consoles, non-Nintendo games that stand the test of time are hard to find outside of a few franchises that usually have more modern versions on Switch.

    We are just spoiled for choice these days when it comes to games, especially with indie games. And indies these days often have better UX than most mainstream games back then.

  • Being entitled to equal rights doesn’t mean they actually get them. It also doesn’t account for the fact that many Palestinians are denied citizenship or remain in occupied territories that are controlled by Israel and explicitly not guaranteed equal rights.

    The comprehensive report, Israel’s Apartheid against Palestinians: Cruel System of Domination and Crime against Humanity, sets out how massive seizures of Palestinian land and property, unlawful killings, forcible transfer, drastic movement restrictions, and the denial of nationality and citizenship to Palestinians are all components of a system which amounts to apartheid under international law. This system is maintained by violations which Amnesty International found to constitute apartheid as a crime against humanity, as defined in the Rome Statute and Apartheid Convention.

    source
  • But simply knowing the right words to say in response to a moral conundrum isn’t the same as having an innate understanding of what makes something moral. The researchers also reference a previous study showing that criminal psychopaths can distinguish between different types of social and moral transgressions, even as they don’t respect those differences in their lives. The researchers extend the psychopath analogy by noting that the AI was judged as more rational and intelligent than humans but not more emotional or compassionate.

    This brings about worries that an AI might just be “convincingly bullshitting” about morality in the same way it can about many other topics without any signs of real understanding or moral judgment. That could lead to situations where humans trust an LLM’s moral evaluations even if and when that AI hallucinates “inaccurate or unhelpful moral explanations and advice.”

    Despite the results, or maybe because of them, the researchers urge more study and caution in how LLMs might be used for judging moral situations. “If people regard these AIs as more virtuous and more trustworthy, as they did in our study, they might uncritically accept and act upon questionable advice,” they write.

    Great, so the headline of the article directly feeds into the very issue the scientists are warning about when it comes to public perception of AI morality.