web3cafe[.]in/artificial-intelligence/story/university-of-chicago-develops-nightshade-to-prevent-ai-training-on-artists-digital-works-699525-2023-10-24
glaze.cs.uchicago[.]edu/index.html
glaze.cs.uchicago[.]edu/downloads.html
AI models are trained on billions of images collected from the web without permission or notification, regardless of any terms of use or copyright. When you ask an image generator for a picture of a Cocker Spaniel, for example, it draws on what it learned from those collected images. If you've uploaded photos or images, you may not mind, or you may find that someone's used one of these image generators to create fake porn from a photo you posted to Facebook. If WR was still doing his comprehensive game reviews, I imagine he wouldn't be keen on someone using AI to post a review they get paid for, built on images it took him hours to capture. Most people know about Bing Chat & ChatGPT, but anyone can download AI software, including basic training data sets, so there are thousands of AI models out there, including some being actively developed by cyber criminals.
Anyway, right now there's nothing anyone can do to stop their images being scraped from the web to train AI. Researchers at the University of Chicago are working on a potential countermeasure called Nightshade, which they plan to incorporate into their existing tool, Glaze. Both alter images in a way that's invisible to us but messes with AI. Nightshade takes it a step further by *poisoning* images: if an AI model trains on too many of these poisoned images, it starts learning the wrong associations and its output breaks.
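For the curious, here's a rough sketch of the general trick both tools build on: adversarial perturbation, i.e. nudging an image's pixels within a budget too small for people to notice, so that a model's feature extractor "sees" a different concept. To be clear, this is my own simplified illustration, not the actual Glaze or Nightshade algorithm (those target text-to-image training pipelines specifically); the ResNet feature extractor, the eps budget, and the dog/cat file names are all stand-ins.

```python
# Illustrative sketch only: a generic feature-space adversarial perturbation,
# NOT the real Glaze/Nightshade method. It shows the core idea both tools
# rely on: tiny pixel changes (invisible to people) that push an image's
# learned features toward a different concept.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in feature extractor (the real tools target a generator's own encoders).
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval().to(device)
extract = torch.nn.Sequential(*list(net.children())[:-1])  # drop classifier head
for p in extract.parameters():
    p.requires_grad_(False)

to_tensor = T.Compose([T.Resize((224, 224)), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

# "dog.jpg" / "cat.jpg" are placeholder file names.
source = to_tensor(Image.open("dog.jpg").convert("RGB")).unsqueeze(0).to(device)
target = to_tensor(Image.open("cat.jpg").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    target_feat = extract(normalize(target))

eps = 8 / 255  # max per-pixel change: small enough to be hard to see
delta = torch.zeros_like(source, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    poisoned = (source + delta).clamp(0, 1)
    # Pull the poisoned image's features toward the target concept's features.
    loss = F.mse_loss(extract(normalize(poisoned)), target_feat)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # keep the change inside the invisibility budget

poisoned = (source + delta).detach().clamp(0, 1)
# To a human, `poisoned` still looks like the dog photo; to the feature
# extractor it now resembles the cat. Train a model on enough images like
# this and it learns corrupted associations for the targeted concept.
```

The real research is considerably more sophisticated and aimed at the encoders behind tools like Stable Diffusion, but "imperceptible change, big feature shift" is the common thread.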