August 11, 2024

Murray on Visual Legal Rhetoric in the Age of Generative AI and Deepfakes: Renaissance or Dark Ages? @ukcollegeoflaw

Michael D Murray, University of Kentucky College of Law, has published Visual Legal Rhetoric in the Age of Generative AI and Deepfakes: Renaissance or Dark Ages? Here is the abstract.

The paradoxical development of visual generative AI tools, such as OpenAI’s DALL-E 3, Midjourney, and Stable Diffusion, simultaneously signals a renaissance and a potential dark age in visual rhetoric and communication. On the one hand, these tools democratize the creation of visual content, empowering attorneys and others to become artists and illustrators of their legal communications without needing to learn how to draw. These AI systems can simplify complex legal concepts, bridge language barriers, and enhance advocacy. On the other hand, the proliferation of deepfakes presents significant challenges for visual rhetoric. Deepfakes can quickly and easily create realistic but false images, videos, and audio that exploit celebrities, distort facts, and facilitate various crimes; their negative implications include fraud, misinformation, and emotional harm. This technological advancement undermines the credibility of genuine news photography and other highly representational media, as the public struggles to distinguish real from fabricated content and begins to discount all visual media.

The challenge lies in using these tools effectively while maintaining the verisimilitude and integrity of representational visual media, which traditionally relies on its status as an unembellished depiction of reality to achieve its rhetorical and communicative goals. The ethical and professional questions raised by manipulated images extend to the decision whether to edit or alter visual content to improve the communication of the message and enhance understanding, while acknowledging the lurking risk of misleading or confusing the audience with altered or manufactured media.

The article suggests best practices for using generative AI responsibly:

Use Non-representational Visuals: Favor diagrams, charts, drawings, and illustrations over highly representational media to avoid the pitfalls of staged, manufactured, or altered representational imagery.

Disclose Staged Images: Always inform the audience when an image has been staged or recreated to maintain transparency and trust.

Provide Original and Enhanced Versions: Present the original image alongside any enhanced version to allow for critical examination and comparison.

The article concludes by emphasizing the need for vigilance in working with manipulated visuals and in detecting possible deceptions in the works of others. Given the ease with which AI can alter images, lawyers and judges must remain aware of their biases and heuristics in assessing visual evidence, recognizing that even analog photographs and videos do not represent definitive “truths.” The advent of AI-generated visuals necessitates a reassessment of the ethical use of visual media in legal communications to preserve the power of visuals in legal rhetoric.

Download the article from SSRN at the link.