The son of Disney artists, Academy Award-winner Kevin Mack became an early pioneer of computer graphics in visual effects and has worked on films including Big Fish, Fight Club, Apollo 13, and What Dreams May Come. In his artistic work, Kevin has embraced the medium of generative AI, most recently in his new book of AI art, Emergent Visions. In this interview with AP contributor Robert Mack (his cousin), Kevin Mack explores the ethics, opportunities, and uses of generative AI in fine art.
Robert Mack: Who do you think is the creator of AI art? Is it the artist, the company that develops the program, or is AI more a tool than a creator itself?
Kevin Mack: Based on long-accepted concepts and legal precedent, the artist is the creator. When a photographer takes a picture of a sunset, they didn't create the sunset or invent the camera. When an artist pours paint onto a canvas and it flows together to form complex patterns, they didn't create the physics that generates those patterns, nor did they invent paint and canvas. Choice and intention are the essential human contributions to a creative process. The amount of effort, skill, or control used by a human in a creative process is irrelevant to its human authorship.
Generative AI systems are not autonomous; they cannot produce anything without human choice, intention, and action. To be consistent with existing standards, material produced by the current generative AI systems must be considered human authored. In the case of an AI system that is designed to act autonomously, humans would be responsible for the design of the system as well as anything created by the system.
RM: Some artists are upset that they did not consent to companies like OpenAI utilizing their work, claiming that their works have been infringed upon. Should artists be concerned that AI is plagiarizing their work?
KM: It is a common false belief that generative AI accesses and manipulates images from its training data to create imagery. Actually, once trained, these systems no longer have access to the training data—only what they’ve learned. They learn from the data in the same way humans and other neural networks do. Publicly available copyrighted data is routinely used for training purposes by artists, writers, researchers, and commercial schools. The fact that generative AI models trained on this data can be used to infringe copyright is not cause to limit the use of publicly available data for training AI.
The internet, Photoshop, and many other technologies make it trivial for people to infringe copyrighted material. These tools also enable people to learn and to create original material. The requirement of substantial similarity for infringement can be applied to material produced with generative AI the same as it is for material produced with any other creative technology. The use of copyrighted works for training AI models should not be considered differently from other public uses of copyrighted works, such as human education, training, research, and creative inspiration. Only humans can plagiarize and infringe copyright. If an AI-generated output is found to infringe a copyrighted work, the human end user of the AI system is responsible.
RM: As a digital artist, what interested you about AI when you began experimenting with it?
KM: When I began experimenting with AI, I was most interested in its potential for cultivating emergence in my creative process. Emergence refers to the appearance of new properties or behaviors arising from the interactions between the parts of a system, properties or behaviors that the parts alone cannot account for. The whole is both greater than, and different from, the sum of its parts.
Emergence is the essential ingredient in all creative processes, from the subatomic to the astronomic. A classic example of emergence is water, because its fluid properties cannot be predicted from its molecular properties. Life itself is an emergent process, with tissues, organs, organisms, and populations emerging from interacting cells. Using emergence lets me discover my work as well as create it.
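The idea of behavior that the parts alone cannot account for has a classic computational illustration in Conway's Game of Life (our example, not Mack's): each cell follows one trivial local rule, yet patterns like the "glider" arise that travel across the grid, a property no individual cell possesses. A minimal sketch:

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life: each cell's fate depends only
    on its eight neighbors, yet coherent moving structures emerge that
    no single cell's rule describes."""
    # Count live neighbors by summing the eight shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2-3 neighbors; a dead cell is born with 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

# A "glider": five cells whose interactions make the whole pattern
# travel diagonally -- behavior present in no individual cell.
grid = np.zeros((8, 8), dtype=np.uint8)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

for _ in range(4):          # after 4 steps the same glider reappears,
    grid = life_step(grid)  # shifted one cell down and one right
```

The rule set says nothing about motion; the glider's diagonal travel exists only at the level of the whole pattern, which is the sense of emergence Mack describes.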
RM: In the simplest terms, what are text-to-image AI tools and when did you begin adopting them into your practice?
KM: The fundamental technology behind text-to-image AI tools is Artificial Neural Networks (ANNs). ANNs are modeled after the neural networks of biological nervous systems and simulate their properties and information processing paradigm. The ANNs in text-to-image AI are pre-trained on hundreds of millions of labeled images from the internet, learning in basically the same way we do. They infer and extract visual and semantic features, qualities, and relationships and distill them into compact, associative structures of knowledge and understanding.
When a user enters a text prompt describing an image, the AI imagines an image based on the prompt and what it has learned to understand from the images and text it was trained on. This is not a metaphor or hyperbole. Even though they’re implemented very differently from their biological counterparts, the ANNs in text-to-image AI literally learn, understand, and imagine. We tend to associate these abilities exclusively with human cognition, but they are implicit in the most basic network of connected neurons, regardless of whether it’s biological or artificial. The extent to which these abilities are present, capable, or nuanced is a function of the complexity and structure of the network and the information available to it.
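The "compact, associative structures" Mack mentions can be caricatured in a few lines of code (a toy of our own devising, not his workflow): real text-to-image systems encode prompts and images as learned vectors and associate them by proximity in that space. Here the vectors and concept names are made up by hand for illustration.

```python
import numpy as np

# Toy "latent space": each concept the network has learned is a point in
# a vector space; nearby points are semantically associated. Real models
# learn thousands of dimensions from vast sets of image-text pairs.
concepts = {
    "sunset":  np.array([0.9, 0.1, 0.0]),
    "ocean":   np.array([0.7, 0.6, 0.1]),
    "skyline": np.array([0.1, 0.2, 0.9]),
}

def embed(prompt):
    """Stand-in for a text encoder: average the vectors of known words."""
    words = [w for w in prompt.lower().split() if w in concepts]
    return np.mean([concepts[w] for w in words], axis=0)

def closest_concept(prompt):
    """Which learned concept does the prompt most strongly activate?"""
    v = embed(prompt)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(concepts, key=lambda k: cos(v, concepts[k]))
```

In a real system the "closest" region of latent space then conditions an image generator rather than returning a label, but the principle, prompting as navigation through learned associations, is the same one Mack describes.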
RM: How does AI change your process?
KM: Like many other creative technologies, Generative AI produces imagery from a description. What’s different is the method of description. With painting, imagery is described with the application of paint. With 3D modeling and rendering, imagery is described with parameters and algorithms. With generative text-to-image AI, imagery is described with natural language. Each method has its own unique qualities, benefits, and limitations.
I’ve developed the new skill of prompt engineering. I enjoy exploring the latent space of associations in text-to-image neural networks to discover the best ways to prompt for the kind of imagery I want to create. I’ve gained new insight into the associative nature of neural networks which informs my understanding of human minds as well.
RM: New technologies typically lead to creative destruction, but some people see AI bringing about something more profound and potentially threatening. To what extent are these concerns well-founded?
KM: The concept of creative destruction is very relevant. AI will likely bring innovation at an unprecedented scale across many domains. Technologies have done this before, but I expect AI will likely have a greater impact over a shorter period of time. For commercial artists, generative AI is already starting to have an impact. Those who adapt to using AI in their workflow will likely benefit from greater productivity. Those who resist may struggle to remain competitive. As others have said, AI probably won't replace you, but a human who uses AI might.
RM: Having been at the forefront of the digital revolution in Hollywood, what are the lessons and parallels to today’s rapid rise of AI? What are some of the jobs that went away, that were created, or that remained—though perhaps in some altered form?
KM: When I worked as a traditional model-maker and matte painter in visual effects, many of my coworkers were resistant to the rise of computer graphics, which eventually replaced traditional models and matte painting. I was exploring computer graphics very early and was eager to make the transition. Because I had traditional visual effects experience as well as computer graphics skills, I got to help pioneer the use of computer graphics in visual effects. Eventually, many of my coworkers adapted to digital methodologies, but initially, many struggled. Some model-makers shifted to other fields where models were still being used, such as museums and stop-motion animated films. The transition to digital matte painting was a bit easier as it requires many of the same skills as traditional matte painting. But there was still a lot of resistance. AI may have a greater impact on more domains in a shorter period than the digital revolution. If there's a lesson, it's that change and adaptation are necessary for evolution and progress. It can be very challenging, but over time adaptation is inevitable and people and systems adjust.
RM: What is the future of AI in your opinion?
KM: We can only speculate about the future of AI, but our speculations can influence what the future will be. Dystopian speculations serve as cautionary tales to help us avoid dystopian outcomes. Utopian speculations serve as inspiration to guide us toward more utopian outcomes.
I believe that AI provides an incredible opportunity to overcome the many challenges humanity faces. I can envision a future in which we evolve and adapt in symbiosis with AI, abandoning our anthropocentric ways to thrive in sustainable utopian harmony with nature. As for actual predictions about the future of AI, I think Multivac, the wise AI from Isaac Asimov's 1956 story "The Last Question," said it best: "INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
Robert Steven Mack is a research associate with American Purpose and a professional ballet dancer, filmmaker, and writer.
Image: A selection from In The Swamp by Kevin Mack, 2023. (Kevin Mack Art)