2 Comments
Turing

If we focus regulation on the biggest spenders in AI, how do we handle smaller developers whose tools end up causing significant harm later?

TechLaw

Focusing on major spenders helps capture those most likely to deploy at scale, but it does not mean smaller developers escape responsibility. Entity-based regulation can still impose obligations proportionate to risk, regardless of size. If a smaller developer builds a tool that causes significant harm, liability, transparency, or reporting rules can still be triggered. The goal ought to be to prioritise oversight where capacity and potential for wide impact are highest, while also ensuring there are backstops for emerging risks from smaller developers.
