I boil water in a sauce pot on the stove. Slosh it into my mug. Plunk in a tea bag and set the timer on my microwave for 3:30 so that I don’t forget and over-steep it. No milk. No sugar.
I write code and play games and stuff. My old username from reddit and HN was already taken and I couldn’t think of anything else I wanted to be called so I just picked some random characters like this:
>>> import random
>>> ''.join([random.choice("abcdefghijklmnopqrstuvwxyz0123456789") for x in range(5)])
'e0qdk'
My avatar is a quick doodle made in KolourPaint. I might replace it later. Maybe.
I understand a little Japanese, but I'm not very good at it.
Alt: e0qdk@reddthat.com
Have you tried Resonance? It’s a mystery adventure game set in modern times where you play as four different characters whose stories interconnect. It’s been a while since I played it (a decade or so?) but I remember that it had an interesting game mechanic that let you use memories like items in various interactions, as well as a number of puzzles that I rather liked the design of.
It’s not a GUI library, but Jupyter was pretty much made for the kind of mathematical/scientific exploratory programming you’re interested in doing. It’s not the right tool for making finished products, but is intended for creating lab notebooks that contain executable code snippets, formatted text, and visual output together. Given your background experience and the libraries you like, it seems like it’d be right up your alley.
It might be easier to just fire up Wireshark and look for relevant traffic when you trigger the action.
Can Z3 account for lost bits? Did it come up with just one solution?
It gave me just one solution the way I asked for it. With additional constraints added to exclude the original solution, it also gives me a second solution – but that second solution is peculiar to my implementation and does not match yours. If you modeled exactly how the bits are supposed to end up in the result, you could probably correctly find any other solutions that exist, but I just did it in a quick and dirty way.
This is (with a little clean up) what my code looked like:
#!/usr/bin/env python3
import z3

# The four observed random values; each constraint below treats them as
# (32-bit generator output) / 2**32.
rand1 = 0.38203435111790895
rand2 = 0.5012949781958014
rand3 = 0.5278898433316499
rand4 = 0.5114834443666041

def xoshiro128ss(a, b, c, d):
    # One step of xoshiro128**, with every intermediate masked down to 32 bits.
    t = 0xFFFFFFFF & (b << 9)
    r = 0xFFFFFFFF & (b * 5)
    r = 0xFFFFFFFF & ((r << 7 | r >> 25) * 9)
    c = 0xFFFFFFFF & (c ^ a)
    d = 0xFFFFFFFF & (d ^ b)
    b = 0xFFFFFFFF & (b ^ c)
    a = 0xFFFFFFFF & (a ^ d)
    c = 0xFFFFFFFF & (c ^ t)
    d = 0xFFFFFFFF & (d << 11 | d >> 21)
    return r, (a, b, c, d)

# Symbolic initial state, stepped forward four times.
a, b, c, d = z3.BitVecs("a b c d", 64)
nodiv_rand1, state = xoshiro128ss(a, b, c, d)
nodiv_rand2, state = xoshiro128ss(*state)
nodiv_rand3, state = xoshiro128ss(*state)
nodiv_rand4, state = xoshiro128ss(*state)

z3.solve(a >= 0, b >= 0, c >= 0, d >= 0,
         nodiv_rand1 == int(rand1*4294967296),  # 4294967296 == 2**32
         nodiv_rand2 == int(rand2*4294967296),
         nodiv_rand3 == int(rand3*4294967296),
         nodiv_rand4 == int(rand4*4294967296)
         )
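The additional constraints I mentioned were nothing fancy. Roughly, the idea is to switch from z3.solve() to a z3.Solver so you can forbid the first model and check again; a quick sketch along those lines (reusing the names from the script above, not exactly what I ran):

s = z3.Solver()
s.add(a >= 0, b >= 0, c >= 0, d >= 0,
      nodiv_rand1 == int(rand1*4294967296),
      nodiv_rand2 == int(rand2*4294967296),
      nodiv_rand3 == int(rand3*4294967296),
      nodiv_rand4 == int(rand4*4294967296))
if s.check() == z3.sat:
    m = s.model()
    # rule out the exact (a, b, c, d) just found and ask for another one
    s.add(z3.Or(a != m[a], b != m[b], c != m[c], d != m[d]))
    if s.check() == z3.sat:
        print(s.model())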
I never heard about Z3
If you’re not familiar with SMT solvers, they are a useful tool to have in your toolbox. Here are some links that may be of interest:
Edit: Trying to fix formatting differences between kbin and lemmy
Edit 2: Spoiler tags and code blocks don’t seem to play well together. I’ve got it mostly working on Lemmy (where I’m guessing most people will see the comment), but I don’t think I can fix it on kbin.
If I understand the problem correctly, this is the solution:
a = 2299200278
b = 2929959606
c = 2585800174
d = 3584110397
I solved it with Z3. Took less than a second of computer time, and about an hour of my time – mostly spent trying to remember how the heck to use Z3 and then a little time debugging my initial program.
What I’d do is set up a simple website that uses a little JavaScript to write the date and time into the page and periodically refresh an image under/next to it. Size the image to fit whatever free space is left in your iPad layout, and then you can stick anything you want there (pictures/reminder text/whatever) with your favorite image editor. Upload a new image to the server whenever you want to change the note. The idea with an image is that it’s really easy to do and keeps the effort of redoing the layout to a minimum – just drag stuff around in your image editor and you’ll know it will all fit as expected as long as you don’t change the resolution, instead of needing to muck around with CSS and maybe breaking something if you can’t see the device to check that it displays correctly.
There are a couple of issues to watch out for – e.g. what happens if the internet connection/server goes down, screen burn-in, keeping the browser from being closed/switched to another page, keeping it powered, etc. – that might or might not matter depending on your particular circumstances. If you need to solve all of that, it might be more trouble than just buying something purpose-built… but getting a first-pass DIY version working is trivial if you’re comfortable hosting a website.
Edit: If some sample code that you can use as a starting point would be helpful, let me know.
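To give a rough idea of what I mean, a bare-bones sketch could look something like this (Python's built-in http.server and the note.png filename are just placeholders for however you actually host it):

#!/usr/bin/env python3
# Serves one page that shows the current date/time and reloads note.png every
# minute; overwrite note.png on the server whenever you want to change the note.
from http.server import HTTPServer, SimpleHTTPRequestHandler

PAGE = b"""<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Notes</title></head>
<body style="margin:0;background:#000;color:#fff;font-family:sans-serif">
  <div id="clock" style="font-size:6vw;text-align:center"></div>
  <img id="note" src="note.png" style="width:100vw">
  <script>
    function tick() {
      document.getElementById("clock").textContent = new Date().toLocaleString();
    }
    function refreshNote() {
      // cache-busting query string so the browser actually refetches the image
      document.getElementById("note").src = "note.png?" + Date.now();
    }
    tick();
    setInterval(tick, 1000);
    setInterval(refreshNote, 60000);
  </script>
</body>
</html>"""

class Handler(SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path in ("/", "/index.html"):
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(PAGE)))
            self.end_headers()
            self.wfile.write(PAGE)
        else:
            # anything else (e.g. note.png) is served from the current directory
            super().do_GET()

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()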
My guess is that if browsers as we know them weren’t invented, HyperCard would’ve become the first browser eventually. No idea where things would progress from there or if it’d have been better or worse than the current clusterfuck. Maybe we’d all be talking about our “web stacks” instead of websites, and have various punny tools like “pile” and “chimney” and “staplr”. Perhaps PowerPoint would’ve turned into a browser to compete with it.
If browsers were invented but JavaScript specifically was not, we’d probably all be programming sites in some VB variant like VBScript (although it might be called something different).
You can’t really, as others have pointed out, but I like Philip K Dick’s definition of reality: “Reality is that which, when you stop believing in it, doesn’t go away.”
GPT4-Vision can do it, sort of. It doesn’t have a particularly great understanding of what’s going on in a scene, but it can be used for some interesting stuff. I posted a link a few weeks back to an example from DALL-E Party, which hooks up an image generator and an image describer in a loop: https://kbin.social/m/imageai@sh.itjust.works/t/661021/Paperclip-Maximizer-Dall-E-3-GPT4-Vision-loop-see-comment
merde posted a link in the comments there to the goatpocalypse example – https://dalle.party/?party=vCwYT8Em – which is even more fun.
I mean, we all know what happened when old Godzilla was hoppin’ around Tokyo city like a big playground… right?
Didn’t the GDPR have a data portability rule requiring that sites provide users the ability to easily export their own data? Does that not apply to Lemmy for some reason – or, am I misremembering it? (I remember account data download being a big deal a while back on reddit, but it’s been a few years…)
I tried messing around with the colors a bit in an image editor and this was the best adaptation I could make: https://files.catbox.moe/03k8sc.png
Yeah; I also tried subbing in case that kicks off federation, and searched a few titles to see if they ended up in the random magazine incorrectly (stuff like that happens sometimes with kbin). The magazine has seen a few microblogs mentioning the channel, and it clearly picked up the avatar/icon, description, etc. somehow, but it doesn’t seem to be getting any videos as threads/posts, and I couldn’t find any floating around disconnected either. I think kbin most likely doesn’t understand what PeerTube is publishing through AP, but there could always be federation weirdness or something.
Doesn’t seem to work right on kbin, unfortunately, although it does show up as a magazine: https://kbin.social/m/thelinuxexperiment_channel@tilvids.com
Reminds me a bit of Kammy Koopa
So I either need something like this that I could host myself (is something like that even feasible?)
The closest thing I could find that already exists is GPT4All Chat with LocalDocs Plugin. That basically builds a DB of snippets from your documents and then tries to pick relevant stuff based on your query to provide additional input as part of your prompt to a local LLM. There are details about what it can and can’t do further down the page. I have not tested this one myself, but this is something you could experiment with.
Another idea – if you want to get more into engineering custom tools – would be to split a document (or documents) you want to interact with into multiple overlapping chunks that fit within the context window (assuming you can get the relevant content out – PyPDF2’s documentation explains why this can be difficult), and then prompt with something like "Does this text contain anything that answers [your question]? [chunk text]". (It may take some experimentation to figure out how to engineer the prompt well.) You could repeat that for each chunk, gathering snippets, and then do a second pass over all the snippets asking the LLM to summarize and/or rate the quality of its own answers (or however you want to combine results).
Basically you would need to give it two prompts: a prompt for the “map” phase that you use to apply to every snippet to try to extract relevant info from each snippet, and a second prompt for the “reduce” phase that combines two answers (which is then chained).
i.e.:
f(a) + f(b) + f(c) + ... + f(z)
where f(a) is the result of the first extraction on snippet a, and + means “combine these two answers using the second prompt”. (You can evaluate in whatever order you feel is appropriate – including in parallel, if you have enough compute power for that.)
If you have enough context space for it, you could include a summary of the previous state of the conversation as part of the prompts in order to get something like an actual conversation with the document going.
No idea how well that would work in practice (probably very slow!), but it might be fun to experiment with.
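If you want a concrete starting point, here's a rough sketch of the map/reduce idea. llm() is a stand-in for whatever interface your local model exposes (prompt string in, reply out), not a real library call, and the chunk sizes are arbitrary placeholders:

def split_into_chunks(text, size=2000, overlap=200):
    """Split the document into overlapping chunks that fit the context window."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def map_phase(llm, question, chunks):
    """First prompt: try to extract anything relevant from each snippet."""
    template = ('Does this text contain anything that answers "{q}"? '
                'If so, quote or summarize it; if not, say "nothing".\n\n{snippet}')
    return [llm(template.format(q=question, snippet=c)) for c in chunks]

def reduce_phase(llm, question, answers):
    """Second prompt: fold the per-snippet answers together two at a time."""
    template = ('Combine these two partial answers to "{q}" into a single, '
                'better answer:\n\n(1) {x}\n\n(2) {y}')
    combined = answers[0]
    for nxt in answers[1:]:
        combined = llm(template.format(q=question, x=combined, y=nxt))
    return combined

def ask_document(llm, document_text, question):
    chunks = split_into_chunks(document_text)
    return reduce_phase(llm, question, map_phase(llm, question, chunks))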
[coreutils-announce] coreutils-8.31 released [stable]
stat now prints file creation time when supported by the file system,
on GNU Linux systems with glibc >= 2.28 and kernel >= 4.11.
https://lists.gnu.org/archive/html/coreutils-announce/2019-03/msg00000.html
(found thanks to this blog post titled “File Creation Time in Linux”)
I don’t. I use the timer on my microwave.