13 Comments
Jay Cee

I'm concerned about nefarious vibe coders creating scams. I was in a workshop on AI agent development, and at least half the attendees were chasing get-rich-quick/crypto/stock schemes, obnoxious marketing campaigns for their home businesses, or ways to automate the generation of social media slop. Scary.

Tech Law Standard

That is a totally valid concern, and one that many of us worry about daily. AI tools are becoming incredibly powerful and accessible, so it is natural to see a sharp divide between those building with care and those chasing quick profits with little regard for harm. There is also, you might say, a relatively low barrier to entry, meaning people can easily misuse AI for scams, the spread of misinformation, or manipulative marketing. We hate to be the bearer of bad news, but regulation is catching up slowly (perhaps too slowly), and lawmakers still have a long way to go.

Intoobus

Hey! I saw your post pop up on my homepage and wanted to show some support. If you get a chance, I’d really appreciate a little love on my latest newsletter too. Always happy to boost each other!

Digital-Mark

I would say that after the documentation has been done, a lessons-learned chapter can be created to help correct the situation in the future.

Tech Law Standard

Thanks, Digital-Mark, for reading and taking the time to engage with the article. You are absolutely right: a “lessons learned” phase could be incredibly valuable both for preventing future harm and for improving legal and regulatory responses.

In fact, we explored that idea in the research article by highlighting how some legal systems are beginning to move away from reactive enforcement to more structured, proactive approaches, including audits, risk classifications, and accountability frameworks that could act as those “corrective” chapters you are referring to. These issues have to be discussed side by side.

Also, what’s especially important is that these lessons are not just internal to companies or developers, but are also shared across the legal and tech communities, and even the public, to support better decision-making around AI going forward.

The conclusion also carefully summarises the key points and the way forward.

Thanks again for sparking that part of the conversation!

Digital-Mark

No worries. I would say that, in tandem with risk classification, it would be best to include a risk assessment from multiple perspectives (legal, civil law, and cybersecurity).

Tech Law Standard

Thanks again. We completely agree with you that risk assessment needs to be multidimensional. Legal and civil liability, cybersecurity vulnerabilities, and even reputational risks are often intertwined when AI goes wrong. Bringing those perspectives together early in the development and deployment process can actually strengthen both safety and accountability. It also helps ensure that risk is not just treated as a technical glitch but as something with legal and human consequences. We will be addressing some of these issues in a forthcoming newsletter.

Jay Cee

I fear it might be hard to write an after-action report or do forensic root cause analysis after an incident...in the dark, with untrustworthy tools.

Tech Law Standard

Many of these tools operate as black boxes, making it incredibly difficult for anyone to trace decisions or failures after something goes badly wrong. With so little transparency or trustworthiness in how they operate, the meaningful root cause analysis you alluded to becomes nearly impossible, and that weakens both accountability and the legal process as a whole. Can we really afford to treat post-incident review as optional, especially when lives, rights, or public safety are at stake? The stakes are too high to neglect. 😔

Jay Cee

I hate being right.

Having had to create threat analyses and disaster recovery plans for XOM infrastructure, my perspective is sharp but dark.

When I'm right about an unseen threat and management doesn't act on my recommendation, that means something went (or will go) wrong, and I will have to fix it.

This AI "thing" is bigger than anything I've faced. That's why we build what we do: responsible, measurable, understandable, and ethical AI solutions with true alignment.

Tech Law Standard

The feeling of seeing the danger early, sounding the alarm, and being ignored until it is too late is sadly a familiar occurrence in both cybersecurity and now AI. You are right that AI is bigger than anything we have faced in our generation, which is why your emphasis on responsible, measurable, and truly aligned AI is so crucial. Building systems with foresight (used ethically, of course!), not just functionality, is the only way to prevent history from repeating itself on a larger scale.

Jay Cee

Story of my damn life. I was the voice in the room when idiot managers were steering us off a cliff, and I was ignored… then six months later I get tasked with the apologetic “You were right, we messed up. Let's do that thing you said we should do,” and I do my best to clean up.

The stakes are much higher now.

Tech Law Standard

That is a pattern so many experienced professionals know all too well. Being the one who sees the cliff ahead, only to be ignored until it is far too late, is exhausting. The “we will fix it later” mindset does not work in the real world today; but how much does a lack of funds contribute to this lack of proactiveness? Cybersecurity measures are not cheap. We actually just published a new piece today on a related issue: https://www.technologylaw.ai/p/web-hosting-services-ftc-godaddy-cybersecurity
