
Binary Minds | A.I. in Art
Episode 1 | 26m 46s
Explore how A.I. is redefining creativity and the implications it has for the art world.
Artificial Intelligence is redefining creativity. Hear from artists, leading A.I. researchers and legal experts on the implications this technology has for the art world.
Binary Minds is a local public television program presented by WKAR
Support for Binary Minds is provided by MSU Research Foundation

WKAR is supported by the MSU Research Foundation, bringing new innovations to the marketplace for global impact.
Learn more at MSUFoundation.org.
Machine Learning, Deep Learning, and now the generative A.I. approaches that people are familiar with seek to emulate human intelligence.
This phenomenon of generating images with AI is making big waves in the art world.
We have artificial intelligence that we are able to put in a prompt and say, I want to see this vision.
It's missing that spark of life that maybe life can only create.
What we're seeing right now is not necessarily attention to questions like, is this technology actually helping us?
In what ways is it harming us and how can we make sure that it doesn't do that?
It has the potential to completely disrupt life as we know it.
A.I. is a really broad term that encompasses a suite of technologies that imitate any form of human intelligence.
At the core, A.I. systems learn through data.
It takes in a lot of data.
Data.
Obviously, data.
Basically it is the use of giant data sets and algorithms in order to produce a particular output, an answer to a question.
A.I. is pervasive in today's world.
If you are listening to songs on Spotify or you are using a social media platform, all of these use predictive algorithms.
The new generation of AI is multi-modal, which means it can take multiple forms of input, text, video, images, and then produce realistic content that looks like it could have been produced by human beings.
But how is it possible for algorithms devoid of human experiences and emotion to create something that's often indistinguishable from that made by humans?
How can artificial intelligence create art?
I think we should be careful not to confuse creativity with making art.
Art is a reflection of being human and what it means to be human.
Without there being a human touch, this artwork created by artificial intelligence is not going to feel quite right.
With the art that's created, it's good, I think, objectively, but there's just a little piece that isn't there.
It's almost that, you know, you're looking at AI before you know that it's A.I.
I think to some extent this is a matter of the field and people and practitioners both developing new approaches and new skills that relate to a new form of production and creation.
And understanding that it's not push a button and you get fantastic images with no skill, no effort, no thought.
Most of what comes out of generative AI does not come out ready to print or view.
On average, I spend about an hour to 4 hours creating what I consider a desirable end product, and it draws on everything I've learned in 40 years as a photographer.
We use tools to produce art, and we've been changing these tools since the beginning of time.
When photography first came on to the scene, people thought that it was not art.
Everybody was like, Oh, well, these people don't even know how to paint or draw.
They just push buttons.
They're not artists.
Now we're at a moment where photography is absolutely considered art.
I think we're also at a moment where AI art, while not considered art in and of itself, should be considered a tool with which artists can produce art.
As an artist, my main concern is does this image have something to say to people that's worth saying?
Does it get them to connect what they're seeing to the world they live in?
And how it was made is only important to the extent that it connects to that.
I think the biggest thing that took even people like AI researchers by surprise is the ease of use.
Anyone can use generative AI through your browser and you can create any kind of images, text, you name it.
There are a lot of things that you simply can't simulate.
But being able to envision them concretely has some value.
So AI can actually make that happen.
So, like in the world of architectural drawing and models, it's possible to create tangible models that go way beyond drafting techniques.
Like we can imagine what a city would look like if I was going to make it greener and have a more eco-friendly design.
I have begun to use A.I. in my work.
I wanted to be able to use A.I. for what it's good for.
Rendering.
And I'll show you very quickly a sample of how that's done. So that is a drawing.
You take a stylus, you can draw with it, right?
But this is the sketch. From there I refine that drawing, and I use this image and import it into the software, and it gives me back... that.
But I also have to buffer it up by adding a text prompt. You know: Black man, 30s, black dark suspenders, white shirt, lit from behind.
So then I take this, which is the output from the AI, and composite it with my drawing. Going from this... to that is hell.
That is pain and suffering, and pain and suffering is cool.
But I'm in my late fifties now.
I don't have time for the pain and suffering thing anymore. A lot of work to do.
So this is how I utilize AI for my work.
I use it to simply augment my already existing toolset.
One example that really inspired me a lot came from Nigeria and the artist used AI technology to generate images of elderly people wearing super futuristic stylish, elegant clothing.
And I thought this was a phenomenal example of how you could create a kind of new synthesis of traditional approaches and a social situation that is worth addressing in a new way, namely overcoming the devaluing of elderly people and showing that they can be stylish, they can be elegant, they can like themselves, and they can like their bodies.
Artists and amateurs alike are continuing to test the limits of what generative AI is capable of.
So how does A.I. create art?
So the way that programs like DALL-E, Midjourney, and other tools create these outputs, these beautiful works of art, is that they have a lot of examples of labeled art.
That's just images with a text caption on top of each image. The AI model is trained by taking the image, degrading it down to pure visual noise, and then restoring it based on the caption.
Eventually, when the model is presented with any descriptive caption, it reverses this process, starting from noise and restoring it to a realistic image. That process is called diffusion, and it's the basis of almost all AI image-generating programs available at the moment.
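To make that concrete, here is a minimal sketch of the diffusion loop described above. It assumes nothing about how DALL-E or Midjourney are actually built: it uses only NumPy, and the denoiser is a stand-in, where a real system trains a large neural network, conditioned on the caption, to predict the noise added at each step.

```python
# Minimal, illustrative diffusion sketch (not any specific product's implementation).
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, t, num_steps):
    """Training-time corruption: blend the image toward pure Gaussian noise."""
    keep = 1.0 - t / num_steps                 # how much of the original survives
    noise = rng.standard_normal(image.shape)
    return keep * image + (1.0 - keep) * noise, noise

def toy_denoiser(noisy_image, t, caption):
    """Stand-in for the trained model: it should predict the noise added at
    step t, guided by the caption. Here it simply predicts zeros."""
    return np.zeros_like(noisy_image)

def generate(caption, shape=(64, 64, 3), num_steps=50):
    """Generation: start from pure noise and repeatedly subtract predicted noise."""
    x = rng.standard_normal(shape)
    for t in reversed(range(num_steps)):
        predicted = toy_denoiser(x, t, caption)
        x = x - predicted / num_steps          # one small denoising step
    return x

image = generate("a cat wearing an astronaut helmet")
print(image.shape)                             # (64, 64, 3)
```

The `add_noise` function shows the training-time corruption the transcript describes, and `generate` shows the reverse, caption-guided direction used at generation time.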
Just like you train a human being to imitate an artist, we monitor that artificial intelligence agent that's generating the art, its progress at effectively impersonating, and we reward that artificial intelligence agent in proportion to how well it's able to do the imitation.
Just as I went through school and learned from those before me about the world and how the world works,
AI, too, is pulling in all this information from the Internet and various places and learning about the world and understanding how to write and how to generate an image.
I get my inspiration from all over the place, not just one specific thing.
Music.
Other art.
Definitely other artists.
Like all of these little pieces kind of come together like mesh in your brain and build up this bank of imagery or inspiration or ideas that you want to talk about in your work.
You know, it's been a practice for a couple of hundred years to sit in a museum and copy people's styles as part of art training, and going and learning how to imitate Goya or Daumier or Picasso was part of learning your craft.
But where and how the data being used to train the AI systems is sourced is a huge point of contention.
All these algorithms, they've been trained on a lot of data scraped off the Internet.
So what about the people who actually created the original content?
In some sense, their work has been used to train these algorithms.
First of all, let's be clear that AI has tremendous ethical issues.
Tremendous.
It does.
The work is scraped. But the one distinction I make is that it didn't just scrape one person's work.
That was what was happening early on: oh, it scraped my work. Well, it didn't scrape your work, it scraped everything.
There's a lot of debate about the ethics of it: is it right to be able to pull all of these different things and then generate something new from them?
I don't know if that's something that AI can necessarily do or possess. You know, it doesn't have the ability to have that discretion.
Is this taking your ideas?
Is this too close to something you've created? It just assumes that it's fine and takes it.
It's a machine, right?
You know, it doesn't have ideas or morals.
New media has always been something that has been disruptive, both in the art world and in the legal world.
As humans, we have to decide where the line is between inspiration and copying. As it is standing right now in the United States, you have to be a human in order to get a copyright or to get a patent.
We are a very human-centric society right now and we have placed our value in the human expression and to make sure that the human endeavor, the human work is being protected and is being compensated.
But how do you give 3 billion people somewhere between 20 and 86 cents each?
Is that really practical?
So the question of what would be a reasonable way to credit people and to acknowledge that is very difficult to answer.
That being said, it's still worth solving.
The set of images that the algorithms learn from is hugely influential on the output from the algorithm and can reinforce harmful biases and stereotypes.
AI is based on data sets.
Those data sets are extracted from our world.
And of course we live in and have historically lived in a pretty unequal and unjust world.
And therefore the data that AI is based on is unequal and unjust, and as a result it's producing output that reinforces those biases.
And it does so often without us even realizing it, because it seems like it's an objective technology.
So how could it possibly be biased?
If you go to DALL-E or Midjourney and you put in a prompt like CEO, Chief Executive Officer, most often it will be a picture of a white male.
The way that the data gets generated or curated to go into these models depends on who's building them.
The people who build the AI play a very large role in determining what kinds of data the AI should pay attention to when it's trying to generate images.
People don't want to build systems that are discriminatory or biased or hurt people.
There are ways the community is trying to overcome this by figuring out how to communicate to the AI system which features it should and should not pay attention to when it's trying to determine something.
If you were going to generate that image of a chief executive officer at a company, the gender, the age of the person, for instance, and the ethnicity don't matter.
That's an area where there's a lot of activity in the research community, really exciting activity that I think will continue to solve this important problem.
One question would be to what extent do the users of the algorithm, the people who are interpreting that data, have the opportunity to weigh that against their knowledge in areas where there are, you know, sort of long-standing questions about inequity and injustice. You might ask, to what extent are users of these algorithms trained in implicit biases and other kinds of, you know, sort of histories of structural inequality, so that they know about the potential problems of the algorithm, but also the potential problems with their own interpretations of those algorithms.
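As a toy illustration of where that skew comes from, here is a short sketch of how one might audit a caption dataset for the kind of imbalance described above. The captions are made up for the example and are not from any real training set; the point is only that if "CEO" co-occurs mostly with one demographic in the training data, a model trained on it will tend to reproduce that association when prompted with "CEO".

```python
# Toy audit of co-occurrence skew in (hypothetical) training captions.
from collections import Counter

# Made-up captions standing in for scraped training data.
captions = [
    "white male CEO in a boardroom",
    "male CEO giving a speech",
    "female CEO at her desk",
    "male CEO portrait",
]

def cooccurrence(captions, term, attributes):
    """Count how often each attribute word appears in captions containing the term."""
    counts = Counter()
    for caption in captions:
        words = caption.lower().split()
        if term in words:
            for attr in attributes:
                if attr in words:
                    counts[attr] += 1
    return counts

print(cooccurrence(captions, "ceo", ["male", "female"]))
# Counter({'male': 3, 'female': 1}): the majority association a model would tend to learn
```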
The process these AI models use to build images from user prompts is exceedingly complex, leading to questions about the path the AI systems take to produce their output.
We can't quite explain why this algorithm is predicting something the way it is.
We often refer to AI as black boxes because often the developers themselves don't really know what's going on in the technology.
The most classical versions of these artificial intelligence systems, the kind of straightforward machine learning approaches, didn't really have this problem.
We could peel back the onion all the way to the core and totally understand exactly each of the steps in the decision-making process that the AI was using.
What's transitioned more recently is we've given more autonomy to the machines to learn what matters when it's generating its responses to emulate actually what human beings do.
But the reality is that it's really important to open up that black box and to make sure that it's a technology that's really serving us well and not doing harm.
This is really an area where there's a lot of active research happening so that we can separate out the difference between the generation of an explanation for the behavior that a machine learning model or an AI generates and the actual underlying reason that may have driven that outcome.
Another problem coming from generative AI is its ability to spread disinformation.
It is becoming easier to counterfeit.
Where we have deepfakes.
There are many kinds of deepfakes, mostly celebrities' images.
Women have been especially viciously targeted by these kinds of issues.
Audio that sounds like a politician talking to you.
AI can be used to replicate a voice and have it say anything.
In 2024, this technology was used when an AI-generated Joe Biden discouraged people from voting in the New Hampshire primary. A super PAC also used an AI version of Donald Trump's voice in a television attack ad.
There's been a lot of talk about deepfakes and political deception.
There's been photographic deception for the last 100 years before A.I. existed.
A.I. might make certain kinds of deception easier.
But AI didn't invent deception.
Fraud is fraud when it comes to deepfakes, when it comes to people trying to manipulate information with artificial intelligence.
That's fraud.
That's manipulation, that's harmful.
I think many people are furious because they think somebody is trying to trick them.
That's one of the reasons that I started to put a label on my images, to preempt people wondering what's a photograph and what's not.
And I think it would be much more desirable to have a situation where people are like, okay, this artist put it forth as a really good AI-generated image and is upfront and direct in saying it's an AI-generated image. There's no controversy.
What this has created actually is a second order problem within the computer science community, which is how can you fingerprint content that's created or watermark content that's created by AI systems versus by authentic, let's say, human agents.
And that's an open problem that people within the computer science community are working on.
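As a toy illustration of that fingerprinting idea, here is a minimal sketch that hides a short identifier in the least-significant bits of an image's pixels and reads it back later. It assumes nothing about how production systems work: real AI watermarking schemes are statistical and designed to survive edits, and the generator ID here is hypothetical.

```python
# Toy least-significant-bit watermark: embed and recover a short identifier.
import numpy as np

def embed_watermark(image_u8, bits):
    """Overwrite the least-significant bit of the first len(bits) pixels."""
    flat = image_u8.flatten()                  # flatten() returns a copy
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit
    return flat.reshape(image_u8.shape)

def read_watermark(image_u8, n_bits):
    """Recover the first n_bits least-significant bits."""
    return [int(p & 1) for p in image_u8.flatten()[:n_bits]]

rng = np.random.default_rng(0)
generated = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in for AI output
mark = [1, 0, 1, 1, 0, 0, 1, 0]                                 # hypothetical generator ID
tagged = embed_watermark(generated, mark)
print(read_watermark(tagged, len(mark)) == mark)                # True
```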
I think that we're just going to have to, in general, make sure that we are cognizant that those things are out in our world, so that when you see a news article, when you see a picture or a video, before I accept it as truth, I need to take a second look at it.
We've done these studies.
We've shown the images that were created by humans and generated by AI, and people are not able to distinguish between A.I.-generated art and human-generated art.
A recent survey by Yale University found that, on average, respondents could spot AI-generated art 54% of the time. Would you like to try?
Can you spot the A.I.-generated art?
Note: The cat's superciliary whiskers are coming out of the helmet, and the far wing of the helmet is missing.
Many A.I. image generators have problems with text, especially in the background.
There are also details like a button that is missing from his uniform but has been placed on his hat.
The largest clue to this image being AI-generated is the young girl's leg.
It is important to note the headdresses in the AI image are very stereotypical, and the Sauk people favored the headdress style that can be seen in the original.
Were you able to tell AI from the real images?
As possible threats from artificial intelligence grow, the world is racing to construct balanced regulation around AI that protects citizens without stifling innovation.
This is maybe the first time in history, as far as I can see, where you have this cutting-edge technology, but it's been completely spearheaded by private corporations, unlike wireless or the Internet, which were all because of governmental initiatives.
It's a question of public welfare and who is getting the benefits from these technologies?
Are they going to serve the interests of the greater common good?
There's something that's quite alluring and hopeful, frankly, about the idea that there's something outside of us that is not plagued by our politics or our biases or our interests.
And so this idea of this purely technical, quantitatively based technology can really capture our imagination and our attention.
The problem is that as a price, we are often impinging on our own rights in the process.
We have, for example, in the last 20, 30 years, as technologies have enabled us to generate more data and collect more data and do things with that data, like create artificial intelligence, essentially given up our right to privacy.
So several states in the United States have talked about how we own our own data, so that I don't have to consent to any platform training an algorithm using my data.
Is there a way to opt out?
And what are our rights as individuals?
There is a lot of discussion and debate going around, and I think we just have to be tuned in.
We should do our part to educate ourselves about how we are impacted.
Humans are integral to AI.
We tend to think about AI as being outside of society, outside of us.
But really, humans are the ones deciding where the AI should trawl for data or what data sets it should use.
Humans are training the algorithms, at least at the beginning stages, for example for generative AI, and humans are always interpreting the results of that A.I. Those should be the places where there are lots of guardrails and lots of systems around them to make sure that the output is as unbiased as possible and that it is really being used to, you know, maintain the public interest.
A lot of the federal agencies, not just in the U.S., but also internationally, are already taking steps to put the guardrails or boundary conditions on how these systems can be used and for what purposes.
Within universities, like here at Michigan State, we have policies at the classroom level that say what you can use generative AI for and what you can't use it for.
If you are concerned about the impact on society and you want to create a situation that's beneficial to communities, we definitely need guardrails, and they need to be formulated with different communities to ensure that you're doing that ethically and responsibly and with some kind of accountability to the communities you claim to be serving.
We understand technology in the context of our own lives, and because we understand technology in the context of our own lives, that's a perspective that rarely is part of the conversation, but it's really the most important part of the conversation. That social wisdom is actually the thing that public policy should be based on.
We need to have this conversation.
We just, we can't bury our heads in the sand.
This is here to stay.
These are all questions that we need to address to be ahead of the curve.
So what does the future hold for AI art?
I don't think there is a future for AI art...
I think it's going to be futures. I think there'll be a future for commercial applications which run wild with it.
And the artistic community will find ways to approach it creatively and critically and to explore the range of social address that's already been explored with art and perhaps to extend even further.
Artificial intelligence is going to change the way in which we think of art, how we define art, and essentially change the very definition of what it means to be human moving forward in the future.
I see AI being maybe something less scary because we'll know more about it. What we're going to see is we as artists learning to use it well and to our advantage.
Ultimately, there are going to have to be humans utilizing and perfecting that artificial intelligence in order to create an output that is truly genuine and has a human touch to it.
All of us will see some amount of AI augmenting what we do, and I think that's the reality.
And it's maybe part of a larger trend that technology has steadily infused our lives.
When it comes to AI, to be honest, I can't wait for this hype cycle to be over.
You know, I can't wait for us to stabilize our discussions about what AI can actually do and not sort of focus on the doom and gloom or the magic predictions.
It's unlikely to achieve either of those things.
The real story of AI in the future is not going to be about the tool.
AI just allows us to accomplish things more quickly and with greater precision than we ever could before.
But it's still us who are calling the shots, still us who are making the decisions.
The story of A.I. in the future is going to be how we as a species choose to use this.