The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 1 Post
  • 491 Comments
Joined 6 months ago
Cake day: January 12th, 2024


  • All languages are the result of the collective brainfarts of lazy people. English is not special in this regard.

    What you’re noticing is two different sources of new words: making them at home and borrowing them from elsewhere.

    For a Germanic language like English, “making them at home” often involves two processes:

    • compounding - take an old word, add another root, and the meanings combine. Like “firetruck” - a “truck” to deal with “fire”. You can do it recursively, and talk for example about the “firetruck tire” (the space is simply an orthographic convention), or even the “firetruck tire rubber quality”.
    • affixation - take an old word and add a non-root morpheme. Like “home” → “homeless” (no home) → “homelessness” (the state of not having a home).

    The other source of vocabulary would be borrowings. Those words aren’t analysable as the above because they’re typically borrowed as a single chunk (there are some exceptions though).

    Now, answering your question on “why”: the Norman conquest gave English a tendency to borrow words for “posh” concepts from Norman, then French. And in Europe in general there’s also a tendency to borrow posh words from Latin and Greek.




  • Next on the news: “Hitler ate bread.”

    I’m being cheeky, but I genuinely don’t think that “Nazis are using a tool that other people also use” is newsworthy.

    Regarding the blue octopus mentioned at the end of the text: when I criticise the concept of a dogwhistle, it’s this sort of shit that I’m talking about. I don’t even like Thunberg; but, unless there’s context justifying the association of that octopus plushy with antisemitism, it’s simply a bloody toy dammit.



  • Impacted nomenclature:

    • positron → negatron - the antiparticle that annihilates on contact with an electron
    • electronegativity → electropositivity - property typically associated with nonmetallic elements, especially fluorine and oxygen
    • electropositivity → electronegativity - counterpart of the above that nobody cares about
    • reduction → elevation - half-reaction where a substance gains electrons, thus “elevating” its charge; the opposite of oxidation (see the worked example below)
    • oxidation-reduction → oxidation-elevation - the full reaction. Also called “elevation-oxidation”.
    • redox → elox - acronym for the above.
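
    To make the “elevation” logic concrete - a worked example of my own, assuming the flipped sign convention where the electron counts as +1:

    % conventionally: reduction, Cu^{2+} + 2e^- -> Cu (charge drops from +2 to 0)
    % flipped convention: the electron counts as +1, so every sign flips
    \mathrm{Cu^{2-}} + 2e^{+} \longrightarrow \mathrm{Cu^{0}}
    % copper's charge rises from -2 to 0 - hence "elevation"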

  • So Mint can perform the same role as a tablet

    Yeah, you could argue that Mint allows that laptop to perform the same role as a tablet; at most it’s used for simple image editing, web browsing, and listening to music over the SMB network (streamed from my computer, because hers has practically no storage).

    Without a Linux distro the other options would be to “perform” as electronic junk or virus breeding grounds.

    I keep seeing these posts and comments, trying to convince people This Is The Year of The Linux Desktop.

    Drop the strawman. That is neither what the author of the article said, nor what I did.

    The rest of your comment boils down to you noisily beating that strawman to death, and can be safely disregarded as such.


  • From HN comments:

    I just used Groq / llama-7b to classify 20,000 rows of Google sheets data (Sidebar archive links) that would have taken me way longer… Every one I’ve spot checked right now has been correct, and I might write another checker to scan the results just in case. // Even w/ a 20% failure it’s better than not having the classifications

    I classified ~1000 GBA game roms files by using their file names to put each in a category folder. It worked like 90% correctly. Used GPT 3.5 and therefore it didn’t adhere to my provided list of categories but they were mostly not wrong otherwise.

    Both are best case scenarios for the usage of LLMs: simple categorisation of stuff where mistakes are not a big deal.

    [A] I work at Microsoft, though not in AI. This describes Copilot to a T. The demos are spectacular and get you so excited to go use it, but the reality is so underwhelming.

    [B] Copilot isn’t underwhelming, it’s shit. What’s impressive is how Microsoft managed to gut GPT-4 to the point of near-uselessness. It refuses to do work even more than OpenAI models refuse to advise on criminal behavior. In my experience, the only thing it does well is scan documents on corporate SharePoint. For anything else, it’s better to copy-paste to a proper GPT-4 yourself.

    [C] lol I can’t help but assume that people who think copilot is shit have no idea what they are doing.

    [D] I have it enabled company-wide at enterprise level, so I know what it can and can’t do in day-to-day practice. // Here’s an example: I mentioned PowerPoint in my earlier comment. You know what’s the correct way to use AI to make you PowerPoint slides? A way that works? It’s to not use the O365 Copilot inside PowerPoint, but rather, ask GPT-4o in ChatGPT app to use Python and pandoc to make you a PowerPoint.

    A: see, it’s this kind of stuff that makes me mock HN as “Reddit LARPing as h4x0rz”. If a Reddit comment starts out by prefacing the alleged authority of the author over a subject, and then makes a claim, there’s a high likelihood that the claim is some obtuse shit. Like this - the problem is not just LLMs, it’s Copilot being extra shite.

    B: surprisingly sane comment for HN standards, even offering a way to prove their own claim.

    C: yeah, of course you assume = make shit up. Especially about things that you cannot reliably know. And all while shifting the discussion from “what” is said to “who” says it. Muppet.
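
    As an aside, the pandoc route that [D] mentions is real: pandoc can emit .pptx directly, with headings splitting the slides. A minimal sketch (file names are just examples):

    # write a two-slide Markdown deck, then convert it; requires pandoc installed
    printf '# Slide one\n\nhello\n\n# Slide two\n\nworld\n' > slides.md
    pandoc slides.md -o slides.pptx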

    Author makes good points but suffers from “i am genius and you are an idiot” syndrome which makes it seem mostly the ranting of an asshole vs a coherent article about the state of AI.

    Emphasis mine. It’s like “C” from the quote above, except towards the author of the article. Next~

    I didn’t find this article refreshing. If anything, it’s just the same dismissive attitude that’s dominating this forum, where AI is perceived as the new blockchain. An actually refreshing perspective would be one that’s optimistic.

    I’m glad to see that I’m not the only one who typically doesn’t bother reading HN comments. This guy doesn’t either - otherwise he’d know that most comments are in the opposite direction, blinded with idiocy/faith/stupidity (my bad, I listed three synonyms for the same thing).

    I’m just going to say it. // The author is an idiot who is using insults as a crutch to make his case.

    I’m just going to say it: the author of this comment is an idiot who is using insults as a crutch to make his case.

    I’m half-joking by being cheeky with the recursion. (It does highlight the hypocrisy though; the commenter is whining about insults while insulting the author.)

    Serious now: if you’re unable to extract the argumentation from the insults, or to understand why the insults are there (it’s a rant dammit), odds are that you’d do a great favour for everyone on the internet by going offline. Forever.


    “But LLMs are intellig–” PILEDRIVE TIME!!!


  • *slow clapping*

    I’m actually quite interested in machine learning and generative models, especially LLMs. But… frankly? I wish that I were the one saying everything that the author said, including his dry humour. And more importantly, I think that he’s spot on.

    People are selling generative models like they were a magical answer for everything and a bit more. They aren’t. They’re just bloody tools dammit. Sometimes the best for a job, sometimes helpful, sometimes even harmful. And the output is not trustable; this is a practical problem, because it means that you need to cross-check every bloody iota of the output for potential errors.


    I think that I’ll join in and drop my own “angry” rant: I want to piledrive the next muppet who claims that the current models are intelligent.

    inb4:

    1. “But in the fuchure…” - Vomiting certainty over future events.
    2. “Do you have proofs it is not intellijant?” - Inversion of the burden of proof. Prove to me that there’s no dragon orbiting Pluto, or that your mum didn’t get syphilis from sharing a cactus dildo with Hitler.
    3. “Ackshyually wut u’re definishun of intellijans?” - If you’re willing to waste my time with the “definitions game”, I hope that you’re fine wasting hours defining what a “dragon” is, while I “conveniently” distort the definition to prevent you from proving the above.
    4. “y u a sceptic? I dun unrurrstand” - shifting the focus from the topic to the person voicing it. Even then, let’s bite: what did you expect, F.A.I.TH. (filthy assumptions instead of thinking)? Go join a temple dammit. And don’t forget to do some silly chanting while burning an effigy.
    5. “Ackshyually than ppl r not intelljant” - you’re probably an example of that. However that does not address your claim. Sucks to be you.

    Based on real discussions. Misspelled for funzies.


  • To reinforce the author’s views, with my own experience:

    I’ve been using Linux for, like, 20 years? Back then I dual booted it with XP, and my first two distros (Mandriva and Kurumin) have since been discontinued. I remember LILO.

    So I’m probably a programmer, right? …nope, my degrees are in Linguistics and Chemistry. And Linux didn’t make me into a programmer either; the most I can do is pull out a ten-line bash script with some web searching.

    So this “Linux is for programmers” myth didn’t even apply to the 00s, let alone now.

    You need a minimum of 8GB of RAM and a fairly recent CPU to do any kind of professional work at a non-jittery pace [in Windows]. This means that if you want to have a secondary PC or laptop, you’ll need to pay a premium for that too.

    Relevant detail: Microsoft’s obsession with generative models, plus its eagerness to shove its wares down your throat, will likely make this worse. (You don’t use Copilot? Or Recall? Who cares? It’ll be installed by default, running in the background~)

    Linux, on the other hand, can easily boot up on a 10-year-old laptop with just 2GB of RAM, and work fine. This makes it the perfect OS for my secondary devices that I can carry places without worrying about accidental damage.

    My mum is using a fossil like this. It has 4GB or so; it’s a bit slow but it works with an updated Mint, even if it wouldn’t with Windows 10.

    Sure, you can delay an update [in Windows], but it’s just for five weeks.

    I gave the link a check… what a pain. For reference, in Linux Mint, MATE edition:

    That’s it. You click a button. It’s probably the same deal in other desktop environments.


  • By far, my biggest issue with flags in r/place and Canvas does not apply to a (like you said) 20x30. It’s stuff like this:


    People covering and fiercely defending huge chunks of the canvas, for something that is completely unoriginal, repetitive, and boring. And yet it still gets a pass - unlike, say, The Void; everyone fights The Void.

    Another issue that I have has to do with identity: the reason why we [people in general] “default” to a national flag for identity is that our media and governments bombard us with nationalistic discourse, seeking to forge an identity that “happens” to coincide with what they want.

    But, once we go past that, there are far more meaningful things out there to identify ourselves with - such as our cultures and communities, and most of the time they don’t coincide with the countries and their flags.

    As such, I don’t think that this is a discourse that we should promote through the use of the symbols associated with it.

    Maybe where you’re from it’s easy to separate your government flag as its own symbol that doesn’t represent real people

    I think that this is more a matter of worldview than of where we’re from, given that some people in Brazil spam flags in a way that strongly resembles how they do it in the USA.




  • Update: so far my best string was lvxferre/Hello+Fediverse+2393194, yielding 0000006a 48...

    I also did some simple optimisations of the code. Basically “the least you do, the faster it’ll be”.

    i=7100000	# starting point, resumed from an earlier run
    while true; do
    	o=$(echo "lvxferre/Hello+Fediverse+$i" | sha256sum)
    	# print any hash with five or more leading zeroes, plus the number that made it
    	if [[ "$o" == 00000* ]]; then echo "$o $i"; fi
    	# progress marker every 100k tries, so a later run knows where to pick up
    	if [[ "$i" == *00000 ]]; then echo "tried $i combinations..."; fi
    	i=$((i + 1))
    done
    

    Now it’ll show results with five or more leading zeroes, and print a message every 100k tries (so I can resume later on).

    My machine is a potato, mind you. I don’t expect to get into the leaderboard. Still, I’m doing this as a bash exercise.


  • OK… here’s some dumb bash shit.

    #!/bin/bash
    i=0; z=0	# z holds the current run of leading zeroes to look for
    while [[ $i -le 1000000000000 ]]; do
    	o=$(echo "lvxferre/Hello+Fediverse+$i" | sha256sum)
    	# hash starts with the current run of zeroes: print it, then demand one more zero
    	if [[ $o =~ ^($z) ]]; then
    		echo "$i: $o"
    		z="${z}0"
    	fi
    	# progress marker every million tries
    	if [[ $i == *000000 ]]; then
    		echo "$((i / 1000000))M combinations tried..."
    	fi
    	i=$((i + 1))
    done
    

    Feel free to use it. Just make sure to change lvxferre/Hello+Fediverse+ to something else.

    What it does: it generates the SHA256 sum for strings starting with whatever you want and ending in a number between 0 and 10¹². If it finds a hash starting with the current run of zeroes (tracked by z, starting at one zero), it prints it alongside the number, then starts demanding one more leading zero. Every million tests it also prints some output, so you know that the script didn’t freeze.
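
    If you want to keep the hits around while watching the progress, one possible way to run it (the script name is just an example):

    chmod +x grind.sh
    ./grind.sh | tee results.log	# hits and progress go to both the terminal and the file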


  • Here’s the content of the OP. Relevant tidbit: it was posted in r/ChatGPT.

    I followed these steps, but just so happened to check on my mason jar 3-4 days in and saw tiny carbonation bubbles rapidly rising throughout.

    I thought that may just be part of the process but double checked with a Google search on day 7 (when there were no bubbles in the container at all).

    Turns out I had just grew a botulism culture and garlic in olive oil specifically is a fairly common way to grow this bio-toxins.

    Had I not checked on it 3-4 days in I’d have been none the wiser and would have Darwinned my entire family.

    Prompt with care and never trust AI dear people…

    Okay… this is a lot like saying “whales are fish, all fish live in the sea, so whales live in the sea”. As in: right conclusion, idiotic reasoning.

    No, cold infused garlic oil is not safe; that conclusion is correct. However, that’s because you simply don’t bloody know what’s there; it’s like playing Russian roulette - it might be clean, or it might be tainted.

    In other words, you can’t simply vomit certainty like “I just grew a botulism culture” from the presence of carbonation bubbles dammit. Plenty of healthy fermented foods produce carbonation bubbles, including the beer that I’m drinking now and the sour cabbage on my kitchen counter.

    And, when it comes to LLMs, the same (right conclusion, idiotic reasoning) applies. Yeah, the output of any LLM is as trustable as what the village idiot says when he’s drunk; but you need a bigger sample than just one idiotic output to say so dammit. And the answer in this case is technically correct anyway. (You can infuse it. You can eat the result. But you aren’t sure if you can eat it more than once.)



  • Besides everything that the author already said, Fandom is also a cockroach motel from the PoV of the communities using it: it’s trivial to create a new wiki there, but:

    • you can’t close it down even with universal agreement of your community
    • it has obnoxious forking policies intended to keep the community stuck in Fandom
    • the old Fandom wiki surfaces in search results before anything better that you can pull out, not due to quality but due to Fandom’s aggressive pursuit of SEO cancer.

    For anyone wanting more info, check the Minecraft Wiki. That wiki migrated out of Fandom, so you can see all the barriers imposed by the roach motel.

    Speaking of that: I think that it would be damn great if the Fediverse had deeper integration with self-hosted wikis. Forums like Lemmy are great for discussion, but they suck for long-term storage of information - because eventually the info gets flooded with noise.


  • Yeah, it’s actually good. People use it even for trivial stuff nowadays; and you don’t need a pix key to send stuff, only to receive it. (And as long as your bank allows you to check the account through an actual computer, you don’t need a cell phone either.)

    Perhaps the only flaw is shared with the Asian QR codes - scams are a bit of a problem; you could, for example, tell someone that the transaction is for one value and generate a code demanding a bigger one. But I feel like that’s less of an issue with the system and more with the customer, given that the system shows you who you’re sending money to, and how much, before confirmation.

    I’m not informed on Tikkie and Klarna, besides one being Dutch and the other Swedish. How do they work?