![WKAR Specials](https://image.pbs.org/contentchannels/HoaIn0k-white-logo-41-4rtHPfd.png?format=webp&resize=200x)
Pixels and Perspectives: The Intersection of AI and Art
Special | 7m 54s | Video has Closed Captions
Is AI art just another digital tool, or a danger to the art world? AI art generators can improve the efficiency and cost of some creative processes, but there are many concerns about the ethical use of AI. We speak to traditional as well as AI artists about their work, and to experts looking into bias and policy for this rapidly expanding technology. Produced in collaboration with Nova.
Shall we play a game?
Can you spot the A.I.-generated art?
How did you do?
A recent survey by Yale University found that on average, respondents could spot A.I.-generated art 54% of the time.
But with artificial intelligence becoming more advanced every day, A.I. art will become even harder to spot.
How is it possible for algorithms devoid of human experiences and emotion to create something that's often indistinguishable from that made by humans?
How can artificial intelligence create art?
A.I. is a really broad term that encompasses a suite of technologies that imitate any form of human intelligence.
At the core of how A.I. systems learn is data.
It takes in a lot of data.
Data, data.
Data.
Obviously, data.
Basically, it is the use of giant data sets and algorithms in order to produce a particular output.
An answer to a question.
So the way that programs like DALL-E, Midjourney, and other tools create the outputs, these beautiful works of art, is that they have a lot of examples of labeled art.
That's just images and a text caption on top of the image.
The A.I. model is trained by taking the image and degrading it down to pure visual noise, then restoring it based on the captions.
Eventually, when the model is presented with any descriptive caption, it reverses this process, starting with pure noise, then restores it to a realistic image.
That process is called diffusion, and it's the basis of almost all A.I. image-generation programs available at the moment.
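The "degrade to noise" half of diffusion described above can be sketched in a few lines of NumPy. This is a toy illustration only: the step count and noise level are illustrative constants, and the caption-guided reverse step, which in a real system is a trained neural network, is described only in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(image, steps=10, noise_level=0.3):
    """Degrade an image toward pure noise, one small step at a time.

    Each step keeps sqrt(1 - noise_level) of the current signal and
    mixes in fresh Gaussian noise, so after enough steps almost none
    of the original image survives.
    """
    x = image.copy()
    for _ in range(steps):
        x = np.sqrt(1 - noise_level) * x + np.sqrt(noise_level) * rng.standard_normal(x.shape)
    return x

# A toy 4x4 "image": a simple brightness gradient.
image = np.linspace(0.0, 1.0, 16).reshape(4, 4)
noisy = forward_diffuse(image)

# After 10 steps, only sqrt(1 - 0.3) ** 10, about 17%, of the original
# signal remains; the rest is noise. A diffusion model is trained to
# run this process in reverse, starting from pure noise and denoising
# step by step, guided by the text caption.
signal_remaining = np.sqrt(1 - 0.3) ** 10
```

Because the reverse model only ever learns to undo this gradual corruption, handing it pure noise plus a caption is enough to "restore" an image that never existed.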
The set of images that the algorithms learn from is hugely influential on the output of the algorithm and can reinforce harmful biases and stereotypes.
If you go to DALL-E or Midjourney and you put in a prompt like "CEO, Chief Executive Officer," most often it will be a picture of a white male.
And of course, we live in and have historically lived in a pretty unequal and unjust world.
And therefore the data that AI is based on is unequal and unjust, and as a result, it's producing output that reinforces those biases.
And it does so often without us even realizing it, because it seems like it's an objective technology.
So how could it possibly be biased?
The community is trying to overcome this by figuring out how to communicate to the AI system which features it should and should not pay attention to when it's trying to determine something.
If you were going to generate that image of a chief executive officer at a company, the gender, the age of the person, for instance, and the ethnicity don't matter.
That's an area where there's a lot of activity in the research community, really exciting activity that I think will continue to make progress on this important problem.
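One simple form of the idea the speakers describe, not drawn from any particular product, is prompt-level debiasing: when a prompt leaves an attribute unspecified, sample it uniformly instead of letting the model fall back on whatever dominates its training data. The attribute pools below are hypothetical placeholders.

```python
import random

random.seed(42)

# Hypothetical attribute pools; a real system would use far richer
# demographic categories and carefully chosen fairness criteria.
GENDERS = ["woman", "man", "nonbinary person"]
ETHNICITIES = ["Black", "white", "Asian", "Latino", "Middle Eastern"]

def augment_prompt(role):
    """Expand a role-only prompt with uniformly sampled identity
    attributes, so unspecified traits don't default to the statistical
    majority of the training set (a toy sketch of prompt debiasing)."""
    person = f"{random.choice(ETHNICITIES)} {random.choice(GENDERS)}"
    return f"photo of a {person} working as a {role}"

# Generating several images from the same role prompt now varies the
# depicted person rather than repeating one stereotype.
samples = [augment_prompt("chief executive officer") for _ in range(5)]
```

The design choice here is to intervene on the input rather than the model: it is cheap and transparent, but it only covers attributes someone thought to enumerate, which is why research also targets the model's internal features directly.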
The substance of the data being fed into these AI systems isn't the only point of contention.
Where and how the data is sourced are also concerns.
All these algorithms have been trained on a lot of data scraped off the Internet.
So what about people who actually created the original content?
In some sense, their work has been used to train these algorithms.
There's a lot of debate about the ethics of it: is it right to be able to pull all of these different things and then generate something new from them?
I don't know if that's something that AI can necessarily do or possess. You know, it doesn't have the ability to have that discretion.
Is this taking your ideas?
Is this too close to something you've created? It just assumes that it's fine and takes it.
It's a machine, right?
You know, it doesn't have ideas or morals.
You know, it's been a practice for a couple of hundred years to go sit in a museum and copy people's style as part of art training; going and learning how to imitate Goya or Daumier or Picasso was part of learning your craft.
As humans, we have to decide where the line is between inspiration and copying.
The process these AI models use to build images from written user prompts is exceedingly complex.
We can't quite explain why this algorithm was predicting something the way it is.
We often refer to AI as black boxes because often the developers themselves don't really know what's going on in the technology.
The most classical versions of these artificial intelligence systems, the kind of straightforward machine learning approaches, didn't really have this problem.
We could peel back the onion all the way to the core, totally understand exactly each of the steps and decision making process that the AI was using.
What's changed more recently is that we've given more autonomy to the machines to learn what matters when generating their responses, to emulate what human beings actually do.
But the reality is that it's really important to open up that black box and to make sure that it's a technology that's really serving us well and not doing harm.
This is really an area where there's a lot of active research happening, so that we can separate out the difference between the generation of an explanation for the behavior that a machine learning model or an A.I. generates, and the actual underlying reason that may have driven that outcome.
So what does the future hold for AI art?
I don't think there is a future for AI art; I think it's going to be futures.
I think there'll be a future for commercial applications, which run wild with it, and the artistic community will find ways to approach it creatively and critically, to explore the range of social address that's already been explored with art, and perhaps to extend it even further.
All of us will see some amount of AI augmenting what we do, and I think that's the reality and it's maybe part of a larger trend that technology has steadily infused our lives.
In the future, I see AI being maybe something less scary because we'll know more about it and I can see a lot more stipulations and regulations being implemented so that people aren't feeling like they're getting their work stolen from them.
I think citizens have a lot to say.
We understand technology in the context of our own lives. That is a perspective that rarely is part of the conversation, but it's really the most important part of the conversation. That social wisdom is actually the thing that public policy should be based on.
The real story of AI and the future is not going to be about the tool.
A.I. just allows us to accomplish things more quickly and with greater precision than we ever could before.
But it's still us who are calling the shots, still us who are making the decisions.
The story of AI in the future is going to be how we as a species choose to use this.