[–] mindbleach@sh.itjust.works -1 points 1 year ago (1 children)

My guy. We can be pretty confident that expensive training on human-labeled data did not include child pornography. Nobody just slipped in the sort of images that are illegal to even look at.

[–] ndsvw@feddit.de 1 points 1 year ago

What do you mean, "human-labeled"? They train it with as much data as possible, and humans don't validate each byte of it.

How do you build an image-generating AI? You don't google "dog", download images, and label each one "black dog". Instead, you scrape the web and other sources, download pretty much everything you can, train the AI on it, and give it instructions not to generate certain stuff.
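
Roughly, the data-collection step looks like this (a toy sketch, not any real lab's actual pipeline; the seed URL and blocklist are made up): pages get crawled, every image gets paired with whatever alt text it has, and the only "check" is a crude keyword filter on the caption. No human ever looks at the individual images.

```python
# Toy sketch of web-scale image-caption scraping (illustrative only).
# The seed URL and BLOCKLIST are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BLOCKLIST = {"explicit", "nsfw"}  # crude keyword filter, far from exhaustive

def scrape_image_caption_pairs(page_url: str) -> list[tuple[str, str]]:
    """Return (image_url, caption) pairs found on one page."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for img in soup.find_all("img"):
        caption = (img.get("alt") or "").strip()
        src = img.get("src")
        if not src or not caption:
            continue
        # The only "validation" is a keyword check on the caption text;
        # nothing ever inspects the image itself.
        if any(word in caption.lower() for word in BLOCKLIST):
            continue
        pairs.append((urljoin(page_url, src), caption))
    return pairs

if __name__ == "__main__":
    # Placeholder seed URL; a real crawl covers billions of pages.
    for url, caption in scrape_image_caption_pairs("https://example.com"):
        print(caption, "->", url)
```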

I'm 100% sure that ChatGPT was also trained on text including CP fantasies, and I'm 100% sure the image generators were also trained on images you'd classify as "should not exist"...