AI-created child sexual abuse images ‘threaten to overwhelm internet’::Internet Watch Foundation finds 3,000 AI-made abuse images breaking UK law
You are talking about technicalities. For a model to be as good as possible, you train it on the most accurate data.
It is true that you can take SD, modify it to ignore its moral constraints, and then ask for CSAM, but if you have, for example, a collection of real CSAM and train the model on that data, it will be much better at generating believable CSAM. Which is what these criminals do…