If you place an AI model on the EU market after August 2025, you must publish a summary of the data that trained it. These summaries will be updated as models evolve. They cover books, images, audio, video, scraped content, and user input. Companies can protect trade secrets, but they must explain their sources in plain language.
Where should we draw the line between public accountability and the protection of business secrets, especially in a field as fast-moving as AI?
The line should be drawn at transparency of outcomes, not full disclosure of methods. Businesses developing AI need to disclose clear information on risks, safeguards, impacts, and how decisions affecting the public are made.
https://open.substack.com/pub/hamtechautomation/p/corporate-and-government-battle-for?r=64j4y5&utm_medium=ios