This has nothing to do with centralization. AI companies are already scraping the web for everything useful. If you took the content from SO and split it into 1000 federated sites, it would still end up in an AI model. Decentralization would only help if we ever manage to hold the AI companies accountable for the mass copyright violations their industry is built on.
Can you explain how reddit comments or stack overflow answers are “copyright infringement”?
Doesn’t seem relevant to the specific problem this post is about.
Just because something is available to view online does not mean you can do anything you want with it. Most content is automatically protected by copyright. You can use it in ways that would otherwise be illegal only if you are explicitly granted permission to do so.
Specifically, Stack Overflow licenses any content you contribute under the CC-BY-SA 4.0 (older content is covered by other licenses that I omit for simplicity). If you read the license you will note two restrictions: attribution and “share-alike”. So if you take someone’s answer, including the code snippets, and include it in something you make, even if you change it to an extent, you have to attribute it to the original source and you have to share it with the same license. You could theoretically mirror the entire SO site’s content, as long as you used the same licenses for all of it.
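As a rough sketch of what CC BY-SA compliance looks like in practice, here is how you might attribute a snippet adapted from a Stack Overflow answer. The URL, author name, and the snippet itself are placeholders, not a real answer; the point is the attribution header and the share-alike note:

```python
# Adapted from a Stack Overflow answer (placeholder URL/author, for illustration):
#   https://stackoverflow.com/a/0000000 by example_user
# Licensed under CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/
# Changes: renamed variables, added type hints.
# Per the share-alike term, this derivative stays under CC BY-SA 4.0.
def chunk(items: list, size: int) -> list:
    """Split a list into consecutive chunks of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

Whether a few lines of trivial code even clear the threshold of copyrightability is its own debate, but attributing costs nothing and satisfies the license either way.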
So far AI companies have simply scraped everything and argued that they don’t have to respect the original license. They argue that it is “fair use” because AI is “transformative use”. If you look at the historical usage of “transformative use” in copyright cases, their case is kind of bullshit, actually. But regardless of whether it will hold up in court (and whether it should), the reality is that AI companies are going to use everybody’s content in ways they have never been given permission to.
So for now it doesn’t matter whether our content is centralized or federated. It doesn’t matter whether SO has a deal with OpenAI or not. SO content was almost certainly already used for ChatGPT. If you split it into hundreds of small sites on the fediverse, it would still be part of ChatGPT. As long as it’s easy to access, they will use it. Allegedly they also use torrents for input data, so even content that isn’t publicly viewable isn’t safe. If/when AI data sourcing is regulated, the “transformative use” argument fails in court, and the fines are big enough for the regulation to actually work, then sure, the situation described in the OP will matter. But we’ll have to see if that ever happens. I’m not holding my breath, honestly.
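For what it’s worth, the only opt-out that exists today is robots.txt, using the crawler user agents that OpenAI and Common Crawl publish (`GPTBot` and `CCBot` are real, documented names). But it is exactly the kind of voluntary mechanism described above: nothing forces a scraper to honor it.

```text
# robots.txt — a request, not an enforcement mechanism.
# Crawlers can and sometimes do ignore it entirely.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```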
The irony is that folks complain about stuff like Discord partly because it cannot be scraped by search engines, but that would also protect it from being scraped by AI tools.
Until Discord either starts selling data to OpenAI, or someone starts scraping the data anyway via sites like https://spy.pet/ .
Believe me, I’m not saying Discord is the bastion of hope for data protection or anything like that lol.
This has everything to do with centralization, just not with the one small context for it which you picked.
With real decentralization in place market mechanisms work.
Copyright is an artificial, government-granted monopoly.
Market mechanisms don’t work when faced with a monopoly, or work badly in situations distorted by the presence of one (which is closer to this case: Stack Overflow has a monopoly on the reproduction of each post on that website, but the same user could post the same answer elsewhere, creating an equivalent work).
In pretty much every situation where intellectual property is involved, you see the market failing miserably: just look at the current situation with streaming services, which would be completely different if there were no copyright and hence no possibility of exclusive distribution of any title (streaming services would then have to compete on quality of service).
The idea that the free market is something that works everywhere (or even in most cases) is politically driven magical thinking, not economics.
You are not arguing with me. Not reading comments before answering them is disrespectful.
Monopoly situations combined with market mechanisms invariably result in centralization (“monopoly” comes from the Greek for “exclusive sale”), hence market mechanisms won’t “work”, in the sense you mean, in such a scenario, as I explained.
Your argument is circular because it’s like saying that it will work as long as it creates the conditions to make itself work (which is the same as saying “as long as it works”).
Decentralization and distribution should be enforced, yes.
By, for example, institutionalized resistance to anything like IP law, and to regulations and certifications that let bigger fish cull those who can’t afford them, while at the same time maintaining regulations against obvious fraud.
It’s not a circular argument, you’re just not paying attention.
The friendliness of political systems to decentralization doesn’t correlate much with their alignment in terms of left/right or even authoritarian/libertarian. So in my opinion this should be a third dimension on that political compass everybody’s gotten tired of seeing. Then again, there are many other dimensions one could add too, so it’s probably useless.
Market forces lead to the creation of large corporations that then shut down market forces and undermine fair markets. Once a few big corporations dominate, they coordinate their behavior and prices and shut out any new players entering the market. Regulation can counter this up to a point, but once the corporations are wealthy enough to dominate government, regulation also fails. Right-wingers hasten the process by opposing regulation, and have no good answer for how to prevent markets from collapsing into monopolies or cartels. I’m not sure anyone has a good answer to that in a capitalist system.
You realize that there have been multiple websites scraped, right? So decentralizing doesn’t solve this issue in particular. Especially when federated sites like Lemmy provide a view of the entire fediverse (more or less).
This is orthogonal to what I’m talking about. I don’t see scraping as a problem.
The person you were replying to was talking about scraping.
Removed by mod
Removed by mod
A reaction that developed because there are often “eat the rich” types who think they don’t need a brain because they’ve taken the right position.