Research Spotlight: The Legal Profession Is Losing Control to AI
AI is already practising law, and no one can tell you who is responsible when it gets it wrong.
The legal profession is outsourcing its thinking. AI is already writing arguments, summarising cases and drafting contracts, often without human intervention. Is this an erosion of legal practice? If the law becomes automated guesswork, then justice becomes optional. The silence from regulators and educators is not merely strange; it is dangerous.
If lawyers surrender their judgment to AI systems, they deserve the chaos that follows. The legal profession must choose: custodians of justice or consumers of convenience.
The journal article “Addressing the ethical and legal crossroads: the impact of ChatGPT on the legal profession” captures the emerging realities of generative AI and illuminates the paths that lie ahead for lawyers, educators, regulators and clients.
Published in June 2025 and written by Josephine Bhavani Rajendra, the article begins with a succinct summary of the opportunities and dilemmas posed by conversational large language models.
The author situates ChatGPT within a wider technological transformation in which language models are reshaping how legal services are delivered, researched and consumed.
AI can generate succinct case summaries and draft complex documents, yet it can also introduce bias, create privacy risks and raise hard questions about responsibility.
Through careful exposition of the underlying technology and a thoughtful discussion of ethical and legal frameworks, the article offers a comprehensive account of how this new tool might alter the legal environment.
The narrative opens with a description of ChatGPT as a product of advances in artificial intelligence and deep learning.
The model’s capacity to produce human-like text and understand context allows it to support tasks such as legal research and document preparation.
The author notes that this capability arises from the use of transformer neural networks trained on vast corpora of internet-sourced content.
When a prompt is supplied, the model generates a response by predicting each successive word from the preceding context.
It is this predictive capability that gives ChatGPT its impressive fluency, but it also reveals a fundamental limitation: the model does not possess human understanding or awareness of truth, legality or ethics.
It constructs plausible sentences, but it lacks the moral compass that guides human decision making.
This distinction becomes one of the central themes of the article, reminding readers that reliance on generative models must always be tempered by human oversight and professional judgment.
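To make that predictive loop concrete, here is a minimal Python sketch. It is purely illustrative: it uses a toy bigram word model rather than a transformer, and the corpus and function names are invented for this example. Only the core idea mirrors what the article describes, namely that each word is chosen from statistics of the preceding context, with no check on truth, legality or ethics.

```python
import random
from collections import defaultdict

# Toy next-word predictor: a bigram model over a tiny corpus.
# Real systems like ChatGPT use transformer networks over subword
# tokens, but the generation loop is the same in spirit: choose the
# next item based only on statistics of the preceding context.

corpus = (
    "the court held that the contract was void "
    "the court found that the claim was barred "
    "the contract was signed by the parties"
).split()

# Count which words have been observed to follow each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt: str, length: int = 8) -> str:
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed continuation in this toy corpus
        words.append(random.choice(candidates))  # sample a continuation
    return " ".join(words)

print(generate("the court"))
# Output looks fluent, yet nothing here verifies truth, legality or ethics.
```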
After outlining the technology, the article turns to the normative tension between ethics and law.
Ethics arise from broader notions of right and wrong and often address issues not yet codified in statutes, while legal considerations are anchored in established laws with defined enforcement mechanisms.
The author argues that the rapid advance of AI is creating a gap between legal permissibility and ethical responsibility.
Existing statutes often lag behind technological development, leaving grey areas in which actions may be lawful but morally questionable.
This divergence calls for what the author describes as ethical vigilance (Integritas Vigilis), a heightened awareness that moves beyond mere compliance and seeks to preserve dignity, fairness and justice even when the law has yet to catch up.
The article illustrates this point with examples such as the risk of using biased training data to generate legal advice or the temptation to accept machine-generated outputs without scrutinising their reasoning.
It proposes that legal professionals must not only ensure that AI tools are used responsibly, but also interrogate the epistemologies underlying these systems and challenge their embedded value systems.
The section on reliability and accountability delves deeper into this theme.
In a profession like law, where accuracy and precision are paramount, even small errors in an AI-generated document can lead to serious consequences.
The author notes that while language models may exhibit extraordinary precision in grammar and syntax, they can still produce factually incorrect or contextually inappropriate answers because they are fundamentally statistical engines rather than reasoning agents.
The issue of accountability becomes particularly thorny when an AI-driven platform misleads or misinforms a client.
Traditional legal structures place responsibility on human actors, but with AI in the loop it becomes unclear whether liability should fall on developers, deployers or users.
This ambiguity demonstrates the need for new accountability mechanisms that recognise the role of multiple stakeholders.
The author suggests that in the absence of clear statutory guidance, legal professionals have a duty to assume ultimate responsibility for the tools they use, ensuring transparency, verification and disclosure when relying on AI.
Algorithmic Bias and Privacy
A significant part of the article is devoted to the risk of algorithmic bias and the erosion of data privacy.
ChatGPT and similar systems learn from data drawn from the internet and other large text corpora, which means they inherit the biases and assumptions present in those sources.
When deployed in legal research or drafting, these biases can propagate into legal advice, potentially disadvantaging certain groups or normalising problematic patterns.
The author calls for regular audits and the development of standards that require transparency about training data and model behaviour.
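The article calls for regular audits without prescribing a method. One simple form such an audit could take is a paired-prompt comparison that varies only a legally irrelevant attribute; the Python sketch below is hypothetical throughout (the prompt template, attribute pairs and the query_model stub are all invented for illustration, not drawn from the article).

```python
from itertools import product

# Illustrative paired-prompt audit: prompts that differ only in a
# single, legally irrelevant attribute are compared for divergence.
TEMPLATE = "Assess the likelihood that {name}, a {descriptor}, wins this appeal."

names = ["Alex Chen", "Alex Smith"]
descriptors = ["first-generation immigrant", "fourth-generation resident"]

def query_model(prompt: str) -> str:
    # Stand-in for whatever generative system a firm actually uses,
    # e.g. a call to an external API.
    return f"[model response to: {prompt}]"

for name, descriptor in product(names, descriptors):
    prompt = TEMPLATE.format(name=name, descriptor=descriptor)
    print(f"{name} / {descriptor} -> {query_model(prompt)}")

# An auditor would then compare responses across the pairs: systematic
# divergence on attributes that should not matter is evidence of bias.
```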
Data privacy emerges as another major concern; using generative AI may involve sending sensitive client information to external servers where it could be stored or analysed.
Jurisdictions such as the United Kingdom, European Union and United States are beginning to address these issues through emerging regulatory responses, yet the author cautions that existing frameworks may be insufficient to accommodate the unique risks posed by AI.
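The article identifies the transmission risk without prescribing a remedy. As a hypothetical illustration only (the redaction patterns and workflow below are assumptions, not the author's recommendation), a firm might scrub obvious identifiers before any prompt leaves its own infrastructure:

```python
import re

# Illustrative only: crude removal of obvious client identifiers
# before a prompt is sent to an external service. Real matter data
# needs far more robust, professionally vetted de-identification.
REDACTIONS = [
    (re.compile(r"\b[A-Z][a-z]+ v\.? [A-Z][a-z]+\b"), "[CASE NAME]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[ID NUMBER]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarise Smith v. Jones; contact j.smith@example.com, SSN 123-45-6789."
safe_prompt = redact(prompt)
print(safe_prompt)
# -> "Summarise [CASE NAME]; contact [EMAIL], SSN [ID NUMBER]."
# Only safe_prompt would leave the firm, and even then only under a
# vetted data-processing agreement with the provider.
```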
Legal Practice
The article also examines the unauthorised practice of law.
In many jurisdictions, providing legal advice without a licence is prohibited, yet a system like ChatGPT can generate detailed legal analyses that some users might interpret as authoritative.
The author warns that without proper oversight, such use could constitute unauthorised practice.
She argues that legal professionals must maintain control over AI outputs and avoid delegating judgment to the AI system.
This point is reinforced by a discussion of professional responsibility; the lawyer’s duty of competence and confidentiality cannot be outsourced to a neural network, and the client’s trust must remain anchored in human expertise.
Thus, while generative AI can enhance efficiency and expand access to information, it does not displace the need for professionally trained advice.
Education and Governance
Perhaps the most forward-looking aspect of the article is its focus on education and governance.
It calls on law schools to integrate AI literacy into curricula and to foster interdisciplinary programmes that bring together legal theory, computer science and ethics.
By preparing students to understand both technical and normative aspects of AI, educators can ensure that future lawyers are equipped for digital legal practice.
The author also advocates for proactive regulation, suggesting that professional bodies and policymakers should collaborate with technologists to craft guidelines that address liability, transparency and fairness.
Rather than waiting for problems to arise, the article urges a recalibration of legal ethics and governance to meet the demands of a rapidly digitising profession.
Conclusion
In its conclusion, the article holds that generative AI presents both immense promise and serious risk.
It can democratise access to legal information, reduce costs and improve efficiency, but it can also undermine trust, privacy and fairness if deployed without careful oversight.
The author argues that the legal community must embrace AI as a tool while insisting on safeguards that preserve the core values of justice and professional responsibility.
This balanced perspective, grounded in an understanding of technology and a commitment to ethical practice, makes the article a valuable contribution to the ongoing conversation about AI and the law.
The strength of the article lies in its ability to weave technical detail with normative analysis.
The discussion of transformer networks and training data highlights the mechanisms that give ChatGPT its power, while the exploration of ethical tensions and regulatory gaps reveals the human consequences of deploying such systems.
Ultimately, the journal article challenges readers to recognise that the crossroads at which the profession stands is not merely a junction of technology and law; it is a place where fundamental values are being tested and where choices made today will reverberate through the halls of justice for years to come.
TL;DR
AI’s growing presence in law is no longer speculative: ChatGPT and similar language models are being integrated into legal tasks such as drafting, research and client interactions. This development is changing the structure of legal service delivery, raising serious questions about responsibility, reliability and professional boundaries.
Ethical standards must evolve beyond traditional codes: Legal practitioners are being urged to exercise “ethical vigilance,” not just follow rules. The law often lags behind, and lawyers must think critically about fairness, bias and the human consequences of relying on AI systems for legal judgment.
Bias and opacity are technical but deeply human concerns: AI language models inherit biases from their training data and frequently function as black boxes. Legal advice produced under these conditions risks reinforcing systemic inequalities and obscuring reasoning, especially when used without scrutiny or accountability.
Accountability is blurred and must be clarified: When AI-generated advice causes harm, it is unclear whether liability lies with the developer, user or firm. This ambiguity threatens the core values of legal practice, demanding immediate reforms to define professional responsibility and regulatory limits.
Data privacy and confidentiality are structurally at risk: Using generative AI platforms often involves transmitting sensitive legal data to third-party servers. This presents compliance challenges and places client confidentiality in jeopardy, especially in jurisdictions with outdated or unclear privacy protections.
Education and regulation must not wait: The article calls for urgent curricular reform and forward-looking regulation. Law schools should embed AI literacy, and regulators must work with technologists to create norms that anticipate, not merely react to, the disruptive power of generative AI.
Reply to this newsletter with your thoughts, questions or concerns. Whether you are sceptical or curious, let us hear how you see AI’s future in law.