This post was submitted on 10 Jul 2023
89 points (100.0% liked)

Technology


In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright protected. When companies disclose these data sources it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted material they’ve been using.

[–] PabloDiscobar@kbin.social 8 points 1 year ago* (last edited 1 year ago) (2 children)

Your first comment, and it is to support OpenAI.

edit:

Haaaa, OpenAI, that famous hippie-led, non-profit firm.

2015–2018: Non-profit beginnings

2019: Transition from non-profit

Funded by Musk and Amazon. The friends of humanity.

Also:

In March, OpenAI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

Yeah, he closed the source code because he was afraid he would get copied by other people.

[–] Chozo@kbin.social 9 points 1 year ago (1 children)

With replies like this, it's no wonder he was hesitant to post in the first place.

There's no need for the hostility and finger-pointing.

[–] PabloDiscobar@kbin.social 7 points 1 year ago (1 children)

he was hesitant to post in the first place.

Was he hesitant? How do you know that?

[–] Chozo@kbin.social 7 points 1 year ago

He created his account 2 weeks ago and didn't post until tonight.

[–] nicetriangle@kbin.social 1 points 1 year ago (1 children)

keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

I feel like the AI model is going to become self-aware before people like Sutskever do.

[–] Oswald_Buzzbald@kbin.social 1 points 1 year ago

Someone should just create an actual open-source LLM that can learn from and replicate the innovations of all the others, and then use these companies' own arguments about copyright against them.