- cross-posted to:
- stable_diffusion@lemmy.dbzer0.com
- steam@lemmy.ml
Key points:
- new questions for devs submitting their games (about pre- and live-generated AI content, guardrails preventing generation of anything illegal)
- disclaimers on the game’s store page
- new system for reporting illegal content straight from in-game overlay
This was likely in reference to Midjourney, which was the system in question in that ruling. Even for its time, Midjourney had very rudimentary user controls, well behind the open-source tools, which likely didn’t impress the registrar.
There’s also a spectrum of involvement depending on what tool you’re using. I know web-based interfaces don’t allow for a lot of freedom, since the operators want to keep users from generating things outside their terms of use, but with open-source models based on Stable Diffusion you can get a lot more involved and have a lot more freedom. We’re in a completely different world from March 2023 as far as generative tools go.
Take a look at the difference between a Midjourney prompt and a Stable Diffusion prompt.
Midjourney:
a 80s hollywood sci-fi movie poster of a gigantic lemming attacking a city, with the title "Attack of the Lemmy!!" --ar 3:5 --v 6.0
Stable Diffusion:
sarasf, 1girl, solo, robe, long sleeves, white footwear, smile, wide sleeves, closed mouth, blush, looking at viewer, sitting, tree stump, forest, tree, sky, traditional media, 1990s \(style\), <lora:sarasf_V2-10:0.7>
Negative prompt: (worst quality, low quality:1.4), FastNegativeV2
Steps: 21, VAE: kl-f8-anime2.ckpt, Size: 512x768, Seed: 2303584416, Model: Based64mix-V3-Pruned, Version: v1.6.0, Sampler: DPM++ 2M Karras, VAE hash: df3c506e51, CFG scale: 6, Clip skip: 2, Model hash: 98a1428d4c, Hires steps: 16, "sarasf_V2-10: 1ca692d73fb1", Hires upscale: 2, Hires upscaler: 4x_foolhardy_Remacri, "FastNegativeV2: a7465e7cc2a2",
ADetailer model: face_yolov8n.pt, ADetailer version: 23.11.1, Denoising strength: 0.38, ADetailer mask blur: 4, ADetailer model 2nd: Eyes.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur 2nd: 4, ADetailer confidence 2nd: 0.3, ADetailer inpaint padding: 32, ADetailer dilate erode 2nd: 4, ADetailer denoising strength: 0.42, ADetailer inpaint only masked: True, ADetailer inpaint padding 2nd: 32, ADetailer denoising strength 2nd: 0.43, ADetailer inpaint only masked 2nd: True
To break down a bit of what’s going on, I’d like to explain some of the elements found here.
sarasf is the token for the LoRA of the character in this image, and <lora:sarasf_V2-10:0.7> is the character LoRA for Sarah from Shining Force II. LoRA are like supplementary models you use on top of a base model to capture a style or concept, like a patch. Some LoRA don’t have activation tokens, and some that do can be used without their token to get different results. The 0.7 in <lora:sarasf_V2-10:0.7> refers to the strength at which the weights from the LoRA are applied to the output; lowering the number causes the concept to manifest more weakly in the output. You can blend styles this way with just the base model, or with multiple LoRA at the same time at different strengths. You can even take a monochrome LoRA and push its weight into the negative to get some crazy colors.
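If it helps to see it outside the webui, here’s a minimal sketch of the same idea using the diffusers library. The base checkpoint, folder, and file names are placeholders standing in for the actual files from this image, not the exact setup used here.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint; a stand-in for the Based64mix-V3-Pruned model in the metadata above.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights -- a small "patch" of extra weights on top of the base model.
# The local path and file name are hypothetical.
pipe.load_lora_weights("./loras", weight_name="sarasf_V2-10.safetensors")

# The scale of 0.7 plays the same role as the :0.7 suffix in <lora:sarasf_V2-10:0.7>.
image = pipe(
    "sarasf, 1girl, solo, robe, sitting on a tree stump in a forest",
    cross_attention_kwargs={"scale": 0.7},
).images[0]
image.save("sarah.png")
```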
The Negative Prompt is where you include things you don’t want in your image. (worst quality, low quality:1.4) here has its attention set to 1.4; attention is sort of like weight, but for tokens. LoRA bring their own weights to add onto the model, whereas attention on tokens works entirely within the weights they’re given. In this negative prompt, FastNegativeV2 is an embedding known as a Textual Inversion. It’s sort of like a crystallized collection of tokens that tell the model something precise you want, without having to enter the tokens yourself or mess around with the attention manually. Embeddings you put in the negative prompt are known as Negative Embeddings.
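The (token:1.4) attention syntax is a webui feature rather than something built into the model, but negative prompts and Textual Inversion embeddings translate fairly directly. A hedged diffusers sketch, with a placeholder path for the embedding file:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A Textual Inversion embedding is loaded once, then triggered by its token in a prompt.
pipe.load_textual_inversion("./embeddings/FastNegativeV2.pt", token="FastNegativeV2")

# Everything in negative_prompt is what you *don't* want to see in the output.
image = pipe(
    prompt="1girl, forest, traditional media",
    negative_prompt="worst quality, low quality, FastNegativeV2",
).images[0]
```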
In the next part, Steps is how many steps you want the model to take to solve the starting noise into an image; more steps take longer. VAE is the name of the Variational Autoencoder used in this generation, which decodes the model’s latent output into the final image. A mismatch of VAE and model can yield blurry and desaturated images, so some models opt to have their VAE baked in. Size is the dimensions in pixels the image will be generated at. Seed is the number representation of the starting noise for the image; you need this to be able to reproduce a specific image.
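Roughly, those fields map onto a generation call like this. Again a diffusers sketch with placeholder model names; a generic publicly available VAE stands in for kl-f8-anime2.ckpt.

```python
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

# External VAE (stand-in for kl-f8-anime2.ckpt); it decodes latents into pixels.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

# A fixed seed pins the starting noise, which is what makes a result reproducible.
generator = torch.Generator("cuda").manual_seed(2303584416)

image = pipe(
    "1girl, forest",
    num_inference_steps=21,  # Steps
    width=512,               # Size
    height=768,
    generator=generator,     # Seed
).images[0]
```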
Model is the name of the checkpoint used, and Sampler is the name of the algorithm that solves the noise into an image. There are a few different samplers, each with its own trade-offs for speed, quality, and memory usage. CFG scale is basically how closely you want the model to follow your prompt; some models can’t handle high CFG values and flip out, giving over-exposed or nonsense output. Hires steps is the number of steps taken on the second, upscaling pass, which is necessary to get higher-resolution images without visual artifacts. Hires upscaler is the name of the model used during that upscaling step, and again there are a ton of those with their own trade-offs and use cases.
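For completeness, the sampler and CFG scale look something like this in diffusers terms; DPM++ 2M Karras corresponds roughly to the multistep DPM-Solver++ scheduler with Karras sigmas. The hires second pass is a webui feature, so it’s left out of this sketch.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "DPM++ 2M Karras" in the webui ~ multistep DPM-Solver++ with Karras sigmas here.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", use_karras_sigmas=True
)

image = pipe(
    "1girl, forest",
    num_inference_steps=21,
    guidance_scale=6.0,  # CFG scale: higher = stick closer to the prompt
).images[0]
```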
After those come the parameters for ADetailer, an extension that does a post-process pass to fix things like broken anatomy, faces, and hands. We’ll just leave it at that because I don’t feel like explaining all the different settings found there.

https://youtu.be/-JQDtzSaAuA?t=97
https://youtu.be/1d_jns4W1cM
https://www.youtube.com/watch?v=HtbEuERXSqk
Damn, that’s a good chunk of info! Thanks for taking the time to go into details on how things work.
You’re very welcome. My head still hurts.
Is this your comfy workflow, or from someone else?
Someone else.