9 Comments
Intoobus:

Hey! I saw your post pop up on my homepage and wanted to show some support. If you get a chance, I’d really appreciate a little love on my latest newsletter too; always happy to boost each other!

Digital-Mark:

I would say that after the documentation has been done, a lessons-learned chapter can be created to help correct the situation in the future.

Tech Law Standard:

Thanks, Digital-Mark, for reading and taking the time to engage with the article. You are absolutely right: having a “lessons learned” phase could be incredibly valuable both for preventing future harm and for shaping better legal and regulatory responses.

In fact, we explored that idea in the research article by highlighting how some legal systems are beginning to move from reactive enforcement toward more structured, proactive approaches, including audits, risk classifications, and accountability frameworks that could act as the “corrective” chapters you are referring to. These issues have to be discussed side by side.

Also, what is especially important is that these lessons are not kept internal to companies or developers, but are shared across the legal and tech communities, and even with the public, to support better decision-making around AI going forward.

The conclusion also carefully summarises the key points and the way forward.

Thanks again for sparking that part of the conversation!

Digital-Mark:

No worries. I would say that, in tandem with risk classification, it would be best to add a risk assessment from multiple angles (legal and civil law as well as cybersecurity).

Tech Law Standard:

Thanks again. We completely agree with you that risk assessment needs to be multidimensional. Legal and civil liability, cybersecurity vulnerabilities, and even reputational risks are often intertwined when AI goes wrong. Bringing those perspectives together early in the development and deployment process can actually strengthen both safety and accountability. It also helps ensure that risk is not just treated as a technical glitch but as something with legal and human consequences. We will be addressing some of these issues in a forthcoming newsletter.

Comment deleted (May 25)

Tech Law Standard:

Many of these tools operate as black boxes, making it incredibly difficult for anyone to actually trace decisions or failures after something goes badly wrong. With so little transparency or trustworthiness in how they are used, meaningful root cause analysis, as you alluded to, becomes nearly impossible, and that weakens both accountability and the legal process as a whole. Can we really afford to treat post-incident review as optional, especially when lives, rights, or public safety are at stake? The stakes are too high to neglect. 😔

Comment deleted (May 25)

Tech Law Standard:

The feeling of seeing the danger early, sounding the alarm, and being ignored until it is too late is sadly a familiar occurrence in both cybersecurity and now AI. You are right to say that AI is bigger than anything we have faced in our generation, which is why your emphasis on responsible, measurable, and truly aligned AI is so crucial in this day and age. Building systems with foresight (used ethically, of course!), not just functionality, is the only way to prevent history from repeating itself on a larger scale.

Comment deleted (May 27)

Tech Law Standard:

That is a pattern so many experienced professionals know all too well. Being the one who sees the cliff ahead, only to be ignored until it is way too late, is exhausting. The “we will fix it later” mindset does not work in the real world today; but how much does a lack of funding contribute to this lack of proactiveness? Cybersecurity measures are not cheap. We actually just published a new piece today on a related issue: https://www.technologylaw.ai/p/web-hosting-services-ftc-godaddy-cybersecurity

Comment deleted (May 25)

Tech Law Standard:

That is a totally valid concern, and one that most of us worry about on a daily basis. AI tools are becoming incredibly powerful and accessible, so it is natural to see a sharp divide between those building with care and those chasing quick profits with little regard for harm. There is also, you might say, a relatively low barrier to entry, meaning people can easily misuse AI for scams, spreading misinformation, or manipulative marketing. We hate to be the bearer of bad news, but regulation is catching up only slowly (perhaps too slowly), and lawmakers still have a long way to go.
