That there is no perfect defense. There is no protection. Being alive means being exposed; it’s the nature of life to be hazardous—it’s the stuff of living.

  • They are correct in that encoding is a super geeky topic even by the standards of technology discussions.

    It is fascinating to see how encoding has changed across generations. Take a relatively high-bitrate source file and encode it with XviD, x264, x265, and whatever the top AV1 encoder currently is, all at the same bitrate and resolution (a rough sketch follows below).

    Not surprisingly, the biggest jump in quality will be from XviD to x264, but x265 does offer notable improvements.
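
    Here is a rough sketch of that experiment, driving ffmpeg from Python. This is an illustration only: it assumes an ffmpeg build with the libxvid, libx264, libx265 and SVT-AV1 encoders enabled, and the file names and the 2 Mb/s target are placeholders.

    ```python
    import subprocess

    SRC = "source.mkv"   # placeholder: any high-bitrate source clip
    BITRATE = "2M"       # identical target bitrate for every encoder

    # encoder -> output file; all four are real ffmpeg encoder names,
    # but your ffmpeg build must have been compiled with them enabled
    ENCODERS = {
        "libxvid":   "out_xvid.avi",   # MPEG-4 ASP (XviD)
        "libx264":   "out_x264.mp4",   # H.264/AVC
        "libx265":   "out_x265.mp4",   # H.265/HEVC
        "libsvtav1": "out_av1.mkv",    # AV1 (SVT-AV1)
    }

    for codec, outfile in ENCODERS.items():
        # Same source, same bitrate, same resolution: apples to apples.
        # Audio is dropped (-an) since only video quality is being compared.
        subprocess.run(
            ["ffmpeg", "-y", "-i", SRC,
             "-c:v", codec, "-b:v", BITRATE,
             "-an", outfile],
            check=True,
        )
    ```

    Comparing the outputs side by side (or scoring them with SSIM/VMAF) makes the generational gap very obvious.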

  • Thank you for the clarification regarding ASI. That still leaves the question of how “safe ASI” is defined, a key point that is emphasized in their manifesto.

    To use your example: it’s like an early mass-market car industry professional (say, in 1890) discussing road safety and the ethical dilemmas of roads shared between regular drivers and a large share of L4/L5 autonomous cars (some of them working as part-time taxis). I just don’t buy it.

    Mind you, I am not anti-ML/AI. I am an avid user of “AI” (ML?) upscaling, specifically for video, and to a lesser extent Stable Diffusion. While AI video upscaling is very fiddly and good results can be hard to achieve, it is clearly on another level, quality-wise, compared to “classical” upscaling algorithms. I was truly impressed when I managed to run my own SD upscale with good results.
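
    For reference, here is a minimal sketch of what an SD upscale can look like, using Hugging Face’s diffusers library and the stabilityai/stable-diffusion-x4-upscaler checkpoint (an illustrative assumption, not necessarily the exact pipeline I used):

    ```python
    import torch
    from diffusers import StableDiffusionUpscalePipeline
    from PIL import Image

    # Load the 4x upscaler (downloads the checkpoint on first run; needs a GPU).
    pipe = StableDiffusionUpscalePipeline.from_pretrained(
        "stabilityai/stable-diffusion-x4-upscaler",
        torch_dtype=torch.float16,
    ).to("cuda")

    low_res = Image.open("frame.png").convert("RGB")  # placeholder input image

    # A short prompt describing the content helps steer the detail synthesis;
    # tuning this (plus noise level and tiling) is a big part of the fiddliness.
    result = pipe(prompt="a sharp, detailed photograph", image=low_res).images[0]
    result.save("frame_4x.png")
    ```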

    What I am opposed to is oligarchs, oligarch wannabes, and shallow-sounding proclamations of grandiose this or that. As far as I am concerned it’s all bullshit, and they are all, to one degree or another, soulless ghouls who will eat your children alive for the right price and the correct mental excuse model (I am only partially exaggerating; happy to clarify if needed).

    If one has all these grand plans for safe ASI, concern for humanity, and whatnot, then set up a public repo and release all the code under the GPL (along with all relevant documentation, patent indemnification, no trademark tricks, etc.). Considering Sutskever’s status as AI royalty who is also allegedly concerned about humanity, he would be the ideal person to pull this off.

    If you can’t do that, then chances are you’re lying about your true motives. It’s really as simple as that.

  • I don’t consider tech company boardroom drama to be an indicator of anything (in and of itself). This is not some complex dilemma around morality and “doing the right thing”.

    Is my take on their PR copy unreasonable? Is my interpretation purely a matter of subjectivity?

    Why should I buy into this “AI god-mommy” and “Skynet” stuff? The guy can’t even provide a definition of “superintelligence”. That seems very suspicious for a “top mind in AI” (paraphrasing your description).

    Don’t get me wrong, I am not saying he acts like a movie antagonist IRL, but that doesn’t mean we have any reason to trust his motives or ignore the long history of similar proclamations.

  • This honestly looks like a grift to get a nice salary for a few years on VC money. These are not random sales goons peddling shit they don’t understand; they know better, and they still don’t bother to define “superintelligence”, let alone what they mean by “safe superintelligence”.

    I find it hard to believe this wasn’t written with malicious intent. But maybe I am too cynical, and they are so used to people kissing their asses that they think their shit doesn’t smell. Either way, money definitely plays some role in this; they would be stupid not to cash in while the AI hype is hot.