Bernstein on How Technologies of Language Meet Ideologies of Law @UBSchoolofLaw @anyabernstein

April 14, 2021

Anya Bernstein, University at Buffalo Law School, is publishing Technologies of Language Meet Ideologies of Law (Symposium: Law, Language, and Technology) in the 2020 volume of the Michigan State Law Review (forthcoming in 2021). Here is the abstract.

A new technology of interpretation is taking the legal world by storm. Legal corpus linguistics, an approach generally unknown in the field until a few years ago, has suddenly become a focus for articles, conferences, legal briefs, and even judicial opinions. Taking advantage of evolving computational approaches and data collection abilities, legal corpus linguistics searches big data sets of language use to help interpret legal texts. This Article puts legal corpus linguistics in the context of other meaning-making technologies and suggests an approach for analyzing any technology of language in the law. One of my aims is to caution against technological exceptionalism: the view that computerized, automated, or big-data approaches are somehow special, perhaps more trustworthy, less subjective, and more likely to succeed. Rather, I argue that we should ask the same questions, and make the same demands, of any method of interpretation. As science and technology studies (STS) and related scholarship have demonstrated, technology is not neutral or passive. It is a cause in its own right. That makes it particularly important to examine the underlying assumptions that help construct, and are perpetuated through, a given technology.

To elucidate these points, I draw on theorists who have influenced our understandings of the production of knowledge and technological development, showing how the key contributions of Bruno Latour, Ian Hacking, and Michel Foucault should inform our evaluation of legal language technologies. I then introduce legal corpus linguistics, describing its origins in academic linguistics and the somewhat different way it has been practiced in legal interpretation.

Having laid this groundwork, I ask how we should evaluate this emerging technology in legal interpretation. I argue that legal corpus linguistics fails to coherently relate its methods, questions, aims, and claims. Moreover, it inscribes a peculiar view of legal meaning: a narrow, asocial, and abstracted notion of things that are in fact broad, social, and practice-based. The illusion of simplicity that legal corpus linguistics propagates undermines our evolving understanding of the real complexities of law and leaves out participants and contexts that are crucial to the production of law as a social force.

To probe its implications further, I then put legal corpus linguistics in the context of some other ways of giving laws meaning. I choose two that sit at the extremes of simplicity and complexity: dictionary definitions, on the one hand, and administrative rulemaking procedures, on the other. These may seem unrelated or incommensurable, but in fact they are all technologies of legal interpretation that should be considered in comparison to one another. Comparison also helps illuminate those aspects of legal corpus linguistics that fit it snugly into particular legal ideologies but blind it to the realities of how law functions in society.

Download the article from SSRN at the link.