The Phoebus cartel strikes again!
I expected that recording would be the hard part.
I think some of the open-source ones should work if your phone is rooted?
I’ve heard that Google’s phone app can record calls (though it says it aloud when starting the recording). Of course, it wouldn’t work if Google thinks it shouldn’t in your region.
By the way, Bluetooth headphones can have both speakers and a microphone. And Android can’t tell a peripheral device what it should or shouldn’t do with audio streams. Sounds like a fun DIY project if you’re into it, or maybe somebody sells these already.
Haven’t heard of all-in-one solutions, but once you have a recording, whisper.cpp can do the transcription:
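For example, assuming a built whisper.cpp and a downloaded model (paths here are placeholders; in older builds the binary is called `main` instead of `whisper-cli`):

```shell
# transcribe call.wav; -otxt writes the text next to the audio, -of sets the basename
./whisper-cli -m models/ggml-base.en.bin -f call.wav -otxt -of call
```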
The underlying Whisper models are MIT.
Then you can use any LLM inference engine, e.g. llama.cpp, and ask the model of your choice to summarise the transcript:
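With llama.cpp, that could look like this (model path is a placeholder, pick whatever fits your hardware):

```shell
# feed the transcript to a local model as part of the prompt
./llama-cli -m models/model.gguf -n 512 \
  -p "Summarise the following call transcript: $(cat call.txt)"
```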
You can also write a small bash/python script to make the process a bit more automatic.
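A minimal sketch of such a glue script (binary and model paths are placeholders to adjust for your setup; whisper.cpp's `-otxt`/`-of` flags write the transcript next to the audio file):

```python
import subprocess
import sys

# Placeholder paths -- point these at your actual whisper.cpp / llama.cpp
# builds and downloaded models.
WHISPER_BIN = "./whisper-cli"
WHISPER_MODEL = "models/ggml-base.en.bin"
LLAMA_BIN = "./llama-cli"
LLAMA_MODEL = "models/model.gguf"


def build_commands(audio_path: str) -> tuple[list[str], str]:
    """Return the whisper.cpp command and the transcript path it will produce."""
    base = audio_path.rsplit(".", 1)[0]
    transcribe = [WHISPER_BIN, "-m", WHISPER_MODEL, "-f", audio_path,
                  "-otxt", "-of", base]  # -otxt writes <base>.txt
    return transcribe, base + ".txt"


def summarise(transcript: str) -> list[str]:
    """Return the llama.cpp command asking for a summary of the transcript."""
    prompt = "Summarise the following call transcript:\n\n" + transcript
    return [LLAMA_BIN, "-m", LLAMA_MODEL, "-p", prompt, "-n", "512"]


if __name__ == "__main__" and len(sys.argv) > 1:
    cmd, txt_path = build_commands(sys.argv[1])
    subprocess.run(cmd, check=True)
    with open(txt_path) as f:
        subprocess.run(summarise(f.read()), check=True)
```

Run it as `python summarise_call.py call.wav` (hypothetical filename) once the paths are filled in.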
I enjoy xenharmonic music and modern academic music the most, but I’m not familiar with everything there, so any recommendations are welcome if you, reader, have something in mind.
Because we have tons of ground-level sensors, but not a lot in the upper layers of the atmosphere, I think?
Why is this important? Weather processes are usually modelled as a set of differential equations, and you want to know the boundary conditions in order to solve them and obtain the state of the entire atmosphere. The atmosphere has two boundaries: the lower, which is the planet’s surface, and the upper, which is where the atmosphere ends. And since we don’t seem to have a lot of data from the upper layers, that reduces the quality of all predictions.
It would. But it’s a good option when you have computationally heavy tasks and communication is relatively light.
TOTP can be backed up and used on several devices at least.
Once configured, Tor Hidden Services also just work (you may need to use some fresh bridges in certain countries if ISPs block Tor there though). You don’t have to trust any specific third party in this case.
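For reference, the configuration really is small: a hidden service is just a couple of lines in `torrc` (directory and ports here are placeholders):

```
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080
```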
If config prompt = system prompt, its hijacking works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) have discovered that system/user roles don’t matter too much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.
Like Firefox ScreenshotGo? (I think it only supports English though)
xkcd.com is best viewed with Netscape Navigator 4.0 or below on a Pentium 3±1 emulated in Javascript on an Apple IIGS at a screen resolution of 1024x1. Please enable your ad blockers, disable high-heat drying, and remove your device from Airplane Mode and set it to Boat Mode. For security reasons, please leave caps lock on while browsing.
CVEs are constantly found in complex software, that’s why security updates are important. If not these, it’d have been other ones a couple of weeks or months later. And government users can’t exactly opt out of security updates, even if they come with feature regressions.
You also shouldn’t keep using software with known vulnerabilities. You can find a maintained fork of Chromium with continued Manifest V2 support or choose another browser like Firefox.
You can get your hands on books3 or any other dataset that was exposed to the public at some point, but large companies have private human-filtered high-quality datasets that perform better. You’re unlikely to have the resources to do the same.
Very cool and impressive, but I’d rather be able to share arbitrary files.
And looks like you can only send images in DMs, but not in groups/forums.
If your CPU isn’t ancient, it’s mostly about memory speed. VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
8x7B is mixtral, yeah.
Mostly via terminal, yeah. It’s convenient when you’re used to it - I am.
Let’s see, my inference speed now is:
As for quality, I try to avoid quantisation below Q5 or at least Q4. I also don’t see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
Have been using llama.cpp, whisper.cpp, Stable Diffusion for a long while (most often the first one). My “hub” is a collection of bash scripts and an SSH server running.
I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.
I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm might be tricky to set up for various libraries and inference engines, but then it just works. I don’t rent hardware - don’t want any data to leave my machine.
My use isn’t intensive enough to warrant measuring energy costs.
I see!
And it was a stable OS version, not a beta or something? That’s the worst kind of bugs. Hopefully manufacturers start formally verifying hardware and firmware as a standard practice in the future.
Other than what I said in the other reply:
I live in the USA, so getting one would be problematic, but I hear it’s perhaps not entirely impossible for me.
Looks like it has a US release? If you’re unsure or getting a European version, double-check it’s compatible with American wireless network frequencies &c. Specific operators might also have their own shenanigans.
Do you know how it compares to e.g. Fairphone?
Nope, never tried Fairphone.
LLaMA can’t. Chameleon and similar ones can: