
Decoding Disinformation: AI and the Threat to Democracy
Special | 56m 39s
A television special examining artificial intelligence and its impact on the spread of disinformation ahead of the 2024 elections. Experts from diverse fields explore how AI is transforming the way false information is created and consumed, allowing viewers to think more deeply about how we can make informed decisions at a pivotal point in our democracy.
Decoding Disinformation: AI and the Threat to Democracy is a local public television program presented by WKAR
Support for Binary Minds is provided by MSU Research Foundation

WKAR is supported by the MSU Research Foundation, bringing new innovations to the marketplace for global impact.
Learn more at MSUFoundation.org.
Good evening, and welcome to Decoding Disinformation: AI and the Threat to Democracy. I'm Shawn Turner, and I'll be your host for our discussion.
Tonight, we delve into one of the most pressing issues facing not just our nation, but the world.
As artificial intelligence evolves, so too does the ease with which disinformation can be created, amplified and weaponized.
With the 2024 elections just 40 days away, A.I. is reshaping the landscape of political influence, and not always for the better.
In this live special, we'll speak with experts at the intersection of A.I., disinformation, human behavior and public policy.
We'll hear from researchers, policymakers and journalists who are working on the frontlines of this digital battleground.
But this conversation doesn't end with tonight's broadcast.
After we go off the air, we'll continue the discussion online with an exclusive Q&A session streaming on the WKAR News YouTube channel.
Join us there, where you'll have the opportunity to ask our experts your own questions.
What's at stake in this age of artificial intelligence is more than just information.
It's the integrity of our democracy, the trust we place in our institutions and the decisions we make as voters.
The challenge is unprecedented.
But tonight, we begin decoding disinformation.
In our first panel of the evening, we explore the definition of disinformation, how A.I. technology perpetuates the problem, and the influence this may have on human behavior.
Joining me tonight are three of Michigan State University's experts on these subjects.
We have Dustin Carnahan, associate professor of communication, Dr. Maria Molina, assistant professor of advertising and public relations, and Dr. Mohammad Ghassemi, assistant professor of computer science.
Welcome to you all. Thanks for being here.
Thanks.
Thanks for having us.
Dustin, I want to start with you this evening.
Part of the title of our program is Disinformation.
Disinformation is a term that we hear thrown around a lot.
But there are other terms like misinformation that we hear as well.
I think that it's fitting that we start tonight by providing some clarity as to what these terms are.
So can you start us off by sort of telling us what's the difference between disinformation and misinformation?
Sure.
So misinformation is kind of the broader category of statements that are untrue: verifiably false, unsubstantiated or misleading.
We share misinformation, sometimes innocently.
We are just having a conversation.
We don't understand something.
We think it's true.
We share it with our friends or family members, sometimes online, sometimes in person.
Disinformation, on the other hand, is strategic.
It's something that's shared with the specific intent to try to achieve some objective.
This is why we hear the term disinformation campaign quite frequently.
Sometimes this is something that's being originated from a foreign actor who's trying to influence an election.
Sometimes it's with the intent of trying to confuse or obscure the truth in some way.
It has a specific strategic goal in mind.
There's another term that's often thrown around that is related to misinformation and disinformation.
Fake news.
So why don't we go ahead and cover that one before we move on.
Everybody likes to talk about fake news, right?
So fake news was a term that was used about 15, 20 years ago to kind of characterize stories that were produced to look like legitimate news articles but were actually not.
They were made up.
They refer to events that didn't happen.
However, about ten years ago, this term was co-opted by some actors to try to point to any sort of information that was critical of them or harmful to them in some way.
And so we don't see that phrase used as much in recent years, primarily because of the confusion about what's actually being referred to when we say fake news.
Yeah, absolutely.
You know, Mohammad, a lot of disinformation is created with artificial intelligence, and that's another term that's sort of scary to people.
So can you sort of give us, and I know it's a broad term, but can you sort of give us a basic understanding of what we mean when we talk about artificial intelligence or machine intelligence?
Absolutely.
You know, the word intelligence is in artificial intelligence.
And when we think of a person that's intelligent, it's a person who can solve problems.
Right.
And a variety of problems, or engage in creative activities.
And so when we're speaking about artificial intelligence, we're talking about machines that can emulate those general-purpose, problem-solving capabilities that we as human beings have: to, you know, classify things.
For instance, if you're a medical doctor, you look at all this complex information and assign a label.
If you're a stock market expert, you can predict what the future will look like and kind of buy stocks accordingly.
Or if you're a phenomenal teacher or artist, you can generate wonderful content.
And so artificial intelligence is trying to get machines to emulate those problem solving capabilities by looking at data, using reasoning techniques and sometimes more structured representations of knowledge to help us achieve that emulation.
So, Maria, let's bring these two concepts together.
A lot of misinformation and disinformation happens in online posts and on social media platforms. With that understanding of misinformation, disinformation and artificial intelligence,
Can you tell us, is there a role for artificial intelligence to play in flagging information that may be inaccurate or misleading?
Yes.
So detection of misinformation by A.I. has been on our radar for a long time, and there are different approaches to do that.
However, it is important to understand that it can be difficult to do, and the reason for that is because, at least at the content level, when we think about creating this model, right, just as human beings, first we have to have an understanding of what is misinformation and what is real news.
So we gather a huge corpus of data that's split into different classes, with the attempt for the machine to be able to distinguish them.
So it sounds easy, right?
But then when you look at our information ecosystem, we don't only have misinformation and real news, we also have satire.
We have opinion, we have a lot of different types of content.
So that is where the trick comes into play, right?
So how do we distinguish something that is misinformation versus, let's say, polarized content or satire content?
So at the content level, some of this content might have very similar properties and characteristics that make it hard for the machine to actually be able to accomplish it entirely.
But on the positive side, though, the machine can be faster than a human.
So a lot of the approaches sometimes take that human-A.I. collaboration, taking kind of the strengths of both to be able to accomplish it.
But it is a hard task to be able to do.
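To make that concrete, here is a minimal sketch of the supervised approach Dr. Molina describes: gather a labeled corpus, train a model to separate the classes, then score new posts. The tiny corpus, labels and model choice below are illustrative assumptions, not any panelist's actual system.

```python
# Minimal sketch of the classification approach described above, using
# scikit-learn. Real systems train on huge labeled corpora and juggle
# more classes (satire, opinion, polarized content), which is exactly
# where the "similar properties" problem she mentions shows up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "City council approves budget for road repairs next spring.",
    "BREAKING: secret memo proves the election was decided in advance!",
    "New vaccine completes all three phases of safety trials.",
    "Miracle cure THEY don't want you to know about shocks doctors!",
]
labels = ["real", "misinfo", "real", "misinfo"]

# Turn word patterns into features, then fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post; in practice the flag would go to a human reviewer,
# reflecting the human-A.I. collaboration she describes.
print(model.predict(["Leaked photo shows candidate faking rally crowd!"]))
```

Even this toy example hints at the limit she raises: a satirical post and a deceptive one can share nearly identical word patterns, which is why a human stays in the loop.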
Yeah, absolutely.
Dustin, you know, experts debate the degree to which disinformation and misinformation actually influence decision making.
But when we think about it, we know that it's a problem that needs to be addressed.
The most obvious way to address it is by correcting it.
What does your research tell us about correcting disinformation, and when and how to do that?
Yeah, so what we found rather consistently is that we are able to use fact-checking messages or other correction strategies rather effectively to change people's minds after they've come to believe something false, even among the most steadfast of believers.
A lot of this also involves the importance of getting in early.
So once something is introduced into the environment and people start gravitating to it, talking about it, sharing it, the sooner the better.
Because once it becomes ingrained in someone's mind, it's a lot harder to uproot that belief.
So that's one of the strategies is just early detection, figuring out when you need to get out there ahead of something.
We also find that one-shot sorts of fact-checking messages aren't particularly effective, at least not for a very long time.
But if we're able to consistently get the message in front of the right audiences, we find that those effects can be rather durable weeks, months later, meaning that, again, it's not about just one time giving somebody a message and hoping that they're going to be forever cured of whatever belief that they had.
It's reinforcing over time that this isn't right.
This isn't correct.
There is a truth, and this isn't it.
And if we can do that more and more, it does result in effects that generally are long lasting and meaningful.
Yeah.
You know, Mohammad, your research, your work, is sort of at the intersection of the synergies between humans and machines.
But when you take a step back and you think about powerful A.I.
tools being in the hands of average Joes like me, is there cause for concern?
I think artificial intelligence is just an amplifier on who we are and what we can do, in the same way that a car extends our ability to move from point A to point B faster. We could do it with a horse and carriage before; we can do it faster in an automobile.
Well, now, if I want to generate a piece of content, I can use a large language model to do that.
But even if I have an artificial intelligence tool help me create content, and that content could be disinformation, as you suggest, I still am the person who's making the conscientious decision to use that artificial intelligence to create that content and then ultimately to disseminate that to the world.
So I think the way to think about artificial intelligence is as a tool that sort of allows the actor, or the human agent, to distribute or dispense their intentions upon the world faster and more effectively.
So that's good for people who want to use it for great things.
And that's not so good for people who don't want to use it for good things.
Yeah.
So, Maria, let me ask you, you know, when we think about artificial intelligence, it only works to the degree that people actually trust the A.I.
So what does your research tell us about the degree to which people feel confident that A.I.-generated content is something that they can trust?
So when we're thinking about A.I.-generated content, what our research has revealed is that people have different perceptions of A.I. that we call cognitive heuristics.
So these are just rules of thumb that come up based on our observations, our experiences, and we all have those observations and experiences that make us either trust or not trust the A.I., right?
And so what we find is that one is contextual.
There are some contexts that people are more okay with A.I. being that source or that creator of content.
So, right. For example, stories about sports.
But we also find that whenever contexts are a little bit more subjective, people sometimes don't trust it too much.
And that's because, in that case, people perceive the content needs some of that subjectivity and contextualization that people can bring in.
The same happens when we look at content moderation for A.I., right?
We see that some people hold these heuristics of A.I. being very objective and accurate, and in those cases they believe it, but some other people might think, you know, A.I. actually doesn't have that contextualization.
So if you're telling me this is false, I might not trust it too much.
And we find that there's different audience segments that have these heuristics.
So, for example, people who tend to distrust others actually tend to hold that heuristic of objectivity of A.I., so they trust it more than humans.
The same with political conservatives.
We found that they actually tend to believe that, in comparison to humans, A.I. can be more objective and accurate.
But on the flip side, people who are more tech savvy, or who fear A.I., tend to have the heuristic of, you know, A.I. lacking that contextualization.
So they don't trust it as much as humans.
So to get back to the other question that you were asking, should we be using these tools for content moderation, that's yet another complexity, right?
It depends on how people perceive that A.I.
to begin with.
So it's tricky.
Really interesting.
And you all have given us a lot to think about such an interesting discussion.
Dustin, Maria, Mohammad, thank you so much for being here.
We appreciate it. Thank you.
Thanks for having us.
We'll be back in a moment with our next panel.
But first, we'll hear from CEO and chief strategist at Harbor Strategic Public Affairs John Selleck, and CEO and chairman of Grassroots Midwest Adrian Hemond, about the impact of artificial intelligence on their work as campaign consultants and public affairs experts for politicians.
I'd like to welcome to this segment, my two major leaks are the Democratic caucus and the Attorney General's office.
Not true.
Gentlemen, thanks for being with us. Artificial intelligence.
On a scale of 1 to 10, how concerned are you that this is going to go sideways and really damage the political arena?
Well, I think it's a massive opportunity for chaos and distrust and discord, but it's also a massive opportunity for innovation and creativity and advancement.
It can really go both ways.
It probably will go both ways.
But what we're already seeing in elections in other countries, like in the U.K., they question whether some of the candidates even actually existed or whether they were created by A.I. to help boost vote share for burgeoning new parties.
In Slovakia, they created a recording of a voice that sounded like the pro-Western candidate, making it sound like he was talking about faking and stealing the election, and the pro-Kremlin candidate won instead.
So it's already happening elsewhere.
It's coming here.
All right.
But on a scale of 1 to 10 in your private moments, where does it end up?
Well, for now, because of the complete lack of regulation, the fact that the federal government is barely getting its act together. I just saw on the news on the way over here, Tim, that the FCC is thinking about putting restrictions in on fake robocalls.
So they're a little bit behind the ball.
So, you know, it's somewhere around a seven as a problem right now.
What's your number?
Oh, it's a ten.
I'm making so much money off of this stuff right now.
It's a ten.
You should be alarmed.
This is the regulatory regime that we have right now.
John alluded to the fact that Washington is way behind the stick in terms of potential regulations here.
Senator Klobuchar has got a couple of bipartisan bills in the Senate to potentially regulate some applications of A.I.
Part of the problem with those is that there are serious First Amendment concerns, in that lies are protected by the First Amendment.
Well, that's the point.
Okay.
Why are you making money off of this?
There are so many different applications for artificial intelligence in politics.
I think the thing right at the top of the list that worries people the most is just straight-up fakes of things.
We kind of saw this with the DeSantis campaign, where they recorded an ad with an A.I. version of Trump's voice saying something that Trump had said, but they didn't have a recording of it.
So they just made one up.
You know, John pointed to some examples from Europe that we've seen.
That's just the fakes, to say nothing of the way that you can use A.I. tools to get incredibly invasive about mass quantities of information, about millions and millions of people at once, that you can analyze in a way that a human being never would.
And you can use that to target people.
You can use that to create content that's meant to stimulate their particular brains.
It's incredibly frightening.
What we had before was people putting false things in words on Facebook. Now videos and sound are coming out, to the extent of the fake audio of Joe Biden in the primary in New Hampshire this year that said, don't bother to come out and vote, just vote for me in the fall.
It happened and everybody believed it.
The DeSantis clip, it was Trump's words, but they used A.I. to have Trump's voice actually read it aloud, because he hadn't actually done that.
Because his written stuff tends to be even more incendiary than his verbal stuff.
So they added the verbal stuff to it.
Nobody will ever know what's true or not.
And unfortunately, what I think will happen is people just come to accept that you can't trust any of it.
So have you had clients come to you asking to use A.I.?
Well, of course, they all want to know if it's available and is it usable.
But let's look at the gray line we talked about earlier, First Amendment rights.
You referred to lies, but B-roll footage showing fake families having a happy time at the park as the background of a political ad, that's perfectly acceptable.
Commercial advertising the same thing.
We don't think any of those moms and dads and kids that we're seeing buy those great products online are real; they're fake.
And so why can't I just make that?
All right.
So if you have a client that comes to you and says somebody sent out a phony press release, you can't reach enough people to undo what's already been done.
Yes.
Right.
No, that's absolutely correct.
So do you tell this client to go away or do you still take their money?
It depends on the context and whether we think we can actually help them.
You can't unring that bell, but depending on what their goals are, you may be able to do enough damage control for them to survive or still achieve their goals.
There's even the opportunity, Tim, to make you essentially the victim, where you flip the whole thing and bring more attention to yourself as the good guy who is being wronged. There's a lot of different strategies there.
Yes, with media who get snookered and it's going to happen.
It just is.
We can go back to them and talk about them, and they'll do something different.
They'll even bring up the fact that, you know, something, you know, devious is going on here.
But because of social media, especially once it's out there, well, you can't unring that one.
And that's out of control, isn't it?
Yeah.
If we didn't have social media, our lives would be better, right?
I mean, the owners of those social media companies seem to think so.
It's why they largely don't let their children use them.
And on that ominous note, thank you for your expertise, and good to see both of you.
In fact, thanks for having us.
Yeah, take care, guys.
And for more from Adrian and John, you can see their full interview on WKAR.org or on the WKAR news YouTube channel.
Our next conversation delves into what's happening on a state and local level to protect citizens from the harms of A.I.-generated misinformation.
Joining me now is State Representative Penelope Tsernoglou of House District 75 and State Representative Matthew Bierlein of House District 97.
Thank you both for being here.
Thank you for having us.
So you co-sponsored a bipartisan bill that requires that any A.I.-generated political content be marked to show that it is A.I.-generated.
Let me ask you, why did you think it was important to address this issue through legislation?
So I had been reading about it, and just reading that there was federal legislation introduced on this.
And you're just kind of reading articles about A.I., and I chair our elections committee in the House.
So I'm always kind of like looking for, you know, things that are elections related.
And then when I did more research on it, I saw that this was already happening, that A.I. was already being used in political ads and pictures just to make it look like things had or hadn't happened.
So then I kind of, you know, tried to, you know, look for any groups that were working on it.
And we did find one.
But there's actually not, you know, a lot out there yet.
And they're increasing it, you know, as we speak.
But even though it's been around, it's kind of seen as a newer, you know, field, because it's growing so quickly.
So we are trying to, you know, keep up and even get out ahead of it.
And it turned out that Representative Bierlein had also already been working, you know, on A.I.
legislation, which was very similar to what I was working on.
And then we just decided to team up and do it together, because it's always great when you can find, you know, a bipartisan area of agreement.
Yeah.
So a true bipartisan sort of finding of common ground on this one.
It certainly was.
I mean, I had started working on it.
It was one of the first bills we started working on in my office.
One of my young staff members came to me with an idea for some A.I.
related content.
And I said, I don't know if that's the first thing I want to do, but let's see where else that could go.
And we brought it to elections and thought, you know, maybe even just a simple campaign finance violation.
If you don't put the disclaimer on and you generated this with A.I.
technology, then we're going to give you a campaign finance violation.
And we had started to kind of shop that around for co-sponsorship and knew it was going to go to the elections committee.
So I started talking to Representative Tsernoglou, and she said, you know, I'm going to work on the same thing.
Do you want to work together?
I said, yes, sure. Let's get that done.
Yeah.
So this law is similar to a few in other states that clearly make progress in this area.
But let me ask you, when you think about the larger picture of artificial intelligence and disinformation, do you think that there's more to do?
And more importantly, when we think about who should be responsible for regulating and ensuring information integrity, should it be done in legislation, or should others be involved as well?
So I tend to think that there's room for legislation to kind of set some boundaries, but it's kind of like any other technology.
You know, we talked about cars in the previous panel getting you from point A to point B, Inventors are going to push boundaries.
They're going to see how far they can take things because they want to see what they can use that for, for the good, and then people screw it up.
All right.
So let's hold people accountable.
It's kind of where I lean, but there's probably room for a little bit of both.
Okay.
Yeah.
And a lot of, you know, the platforms, like Google, reached out to us, and, you know, anyone that could really potentially be held responsible reached out to us after the bills were introduced, because they just wanted to make sure, well, basically, that they wouldn't be held liable if they were, you know, promoting this unknowingly.
But what I learned in those conversations was that a lot of them already have, you know, precautions and policies, and they're trying to find ways so that this doesn't end up getting pushed out, you know, through their service.
So it's kind of a combination of both.
Like there's a lot of different responsibilities.
But since there were no laws on it, for us as lawmakers it made sense to put some regulation out there so that people who were trying to intentionally mislead would be held accountable for it.
So I want to switch to your districts.
You obviously spend a lot of time out talking to voters, to constituents, and you sort of get a sense of what's on their minds.
So as it relates to disinformation, and the role that disinformation might play in this upcoming election, what are you hearing from your constituents?
Are they unfazed, or are they concerned, or are they angry?
What's the temperature out there in your districts?
I think they are very concerned.
And the more they hear about it, they actually realize that, you know, A.I. can create things that look and sound completely real.
It really scares them that they're seeing this.
And they just don't know.
And they ask questions like, how do we know? What would we do?
And, you know, how can we do more to ensure that it's not here? Because, of course, you know, it could be happening and we aren't even spotting those particular videos or sound files.
So I think people are concerned.
And just in general, because A.I. is new, it kind of raises questions.
Yeah.
Representative Bierlein?
Well, I think, again, from what I've heard from constituents, people are doing a lot of information shopping.
So we see something and maybe we don't know.
And so we start looking, you know, well, I'm going to look on the news, or I'm going to look at this news source, and then I'm going to look at this other news source.
So maybe I looked at CNN, I looked at Fox and I looked at the BBC, and then I came up with, okay, that's what actually happened.
But I think that the technology is pushing people to look for more information, because they're just not satisfied with that one source as being the truth.
Yeah. Well, let's sort of broaden the picture here and ask you, when you think about all of those people out there who are right now trying to find information to make informed decisions as they head to the ballot box here in a few weeks, do you think there are people out there who will go and cast their vote based primarily on disinformation?
I would say, I mean, yes.
Yes, I think people will. I would like to say no, that they'll all go ahead and do their research.
But I think a lot of people, you know, they read what's mailed to them.
They watch what's shown on the TV, and if it's said enough times, they'll believe it.
So, I mean, the best we can do is try to put correct information out there.
Yeah, I would agree.
And I think it's going to be a mix. It doesn't matter if it's disinformation, misinformation or true information.
They're going to go and they're going to vote based on what their worldview is with that information.
And so that true information may be perfect for them, or the misinformation may reaffirm a belief, or the disinformation may, you know, double down on that.
I already thought this about that person.
So now I really know and I'm going to go vote that way.
Yeah.
Yeah.
And we just have to hope that people do exactly what you said and spend some time seeking information and really working to get to the truth.
This conversation went fast.
There's so much more to talk about.
Representative Bierlein and Representative Tsernoglou, thank you so much for being here.
This is really great information.
I appreciate you.
Thank you.
Thank you again.
Our conversation continues in a moment.
But first, we talk with Don Gonyea, national political correspondent with NPR News, to gain insights into reporting in an age of disinformation.
My name is Don Gonyea, and I am a national political correspondent for National Public Radio.
NPR News.
Over the course of my career, it's quite astounding when you think of how journalism has changed. The basics are still the same, right?
You still want to get the story, you still want to work your sources, you want to find reliable sources.
You want to give an objective rendering of whatever tale it is you're trying to tell.
But the way we tell stories has changed.
Public perceptions of what news is have changed. Public expectations as to what a story should be have changed.
These are all things that we try to contend with that we need to contend with.
But I hope we're always keeping the focus on that same kind of core objectivity.
Find the facts, try to understand what is happening, try to share it as cleanly and as efficiently as you can so the audience can grasp it in whatever way they are now receiving it.
The rise of A.I. in the form that we have it today, and recognizing that the technology is advancing, you know, minute by minute, I guess just means we build in these checks and balances where we don't take anything that we see at face value unless we literally saw the live event with our own eyes.
So we as a news organization, we as journalists, need to make sure that we have systems in place that can deal with this, because what is ultimately at stake is public trust.
I mean, do they trust that what we have is real?
The public perception of journalism is always kind of shifting, in reaction to real-time events, in reaction to technology, in reaction to any number of things.
Right.
Even in this election, we're covering the 2024 election as we speak.
There have been pretty big stories about the size of crowds at events.
Former President Trump has alleged that a crowd seen in photographs of a Kamala Harris rally was not as big as the campaign maintained it was, because it was A.I.
Now, photographers who took the images, and TV camera people who covered that event, quickly came forward and said, these are my pictures, these are accurate, these weren't manipulated. But it gets the accusation out there into the bloodstream.
Right.
And all of that can just erode trust broadly.
And all of that is an issue.
But it also just clearly demonstrates that, hey, A.I. was not used in this case, but people were accused of using it, and suddenly there's a debate: did they or didn't they?
And there will be people who, for political reasons, are happy to believe the falsehood if it supports their political view and gives them a talking point, not on television or anything, but just, like, with their friends and with their family.
And it kind of ripples through society in that way.
What is my hope for journalism moving forward?
I mean, not to sound too naive here, but my main hope would be that it be used for good and not to mislead people.
All right.
And that sounds corny, even as I say it, because technology will be used in all sorts of ways if it's available.
But I would hope that we have the tools to make sure we know what A.I.
is.
We can identify when it is being used in a way that does not support journalistic standards, that is there to perhaps undermine them.
Because ultimately, the thing that hangs in the balance here is public trust in what we do.
As we navigate the age of disinformation, journalists face unprecedented challenges.
At a time when it's difficult to trust what we see and hear online, the role of trustworthy journalism has never been more important or more complicated.
Tonight, we're joined by journalists to discuss how they are adjusting to shifting perceptions of the media that are continuing to shape national conversations.
Here with me is Jam Sardar, our news director at WLNS, and Mike Wilkinson, data reporter with Bridge Michigan.
Thank you both for being here.
Absolutely.
Thank you.
Mike, I want to start with you.
You know, right now there are millions of people out there who are working hard to find the news and information that they need to make an informed decision coming into this next election.
And if you were to ask them, is there a role for artificial intelligence to play in journalism?
Many of them would probably say no.
But you've been working in this field for a long time and you have found ways that artificial intelligence can support journalists and journalism in ways that don't compromise journalistic value.
So can you talk a little bit about that?
Sure.
I mean, one thing that Adrian Hemond mentioned in his conversation about politics was the ability to analyze millions of records.
Now, in that case, it was allowing people to target people for elections.
But those same skills and benefits accrue to us as well.
If, personally, I have hundreds of pages of court records, I can't connect one record to another.
I can put it in Google Pinpoint and it will give me an index of every name in there.
And if I want to find something from Shawn in there, I can find it.
And I just couldn't before. In the past, I'd have to just remember it all or do my own indexing.
So it's a tool that allows me to do my job better.
If I heard that there was a school board meeting in Haslett, and they talked about, you're not going to believe what they talked about, and there was an audio recording of it, I could throw it into an A.I., and it would find for me where I wanted to go.
I would still have to watch it and listen to it, and then report on it and talk to the people.
But I could find it so much quicker than I could myself.
Otherwise, I'd just sit there for two or three hours on YouTube, or wait a month and a half for the minutes.
So it's a tool that, if used correctly, can make our work so much more vibrant and effective.
I personally use it a lot when I'm just trying to write code.
It's teaching me how to write code, and it's not affecting what I'm doing in any negative way.
It's making it better, it's enhancing it.
It's giving me more analytical ability.
So those are the good things, of course.
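As one illustration of the record-searching workflow Mike describes, here is a hedged sketch that transcribes a recording and then searches it. He names Google Pinpoint, whose internals aren't public, so this sketch uses the open-source Whisper library instead; the file name and keyword are placeholders.

```python
# Sketch of the "throw the recording into an A.I. and search it"
# workflow described above, using the open-source Whisper library
# (pip install openai-whisper). This stands in for hosted tools like
# Google Pinpoint; it is not how those products work internally.
import whisper

model = whisper.load_model("base")
result = model.transcribe("school_board_meeting.mp3")  # placeholder file

# Each transcribed segment carries timestamps, so a keyword search
# jumps straight to the minutes of audio worth listening to in full.
for seg in result["segments"]:
    if "budget" in seg["text"].lower():  # placeholder keyword
        print(f"{seg['start']:.0f}s: {seg['text'].strip()}")
```

As he notes, the journalist still listens to the flagged passages and reports them out; the tool only shortens the search.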
Yeah.
Yeah.
And I do want to talk about some of the more challenging aspects of artificial intelligence as well.
Referencing, you know, the legislators before. You know, you're hopeful that someone sees something and they're going to go to three other sources to kind of verify it.
My fear is that headline from U.S. News dot-whatever-dot-com, where you're going, I'm not sure that's correct, but it looked just like that other publication that I believe.
My fear is that not enough people will kick the tires and go look, and they might make a decision.
There might be the confirmation bias that they talked about, where that's what I believed anyways.
And that's the part where I get a little scared.
You know, journalists can put their fingers in the dike when there's one hole, but when there's a million and they're all coming at you and they all are the same size and the water's pouring out.
That's the part that scares me.
I wish we were all informed and media literate, but, you know, going to five different sources to find the truth, as they say, ain't nobody got time for that.
So we do it because we're in this business.
But that's one of the many things we have to worry about, people going to news sources that reinforce their worldview, as someone said before, and not necessarily the truth.
Yeah.
And would you agree that oftentimes, even for people who just are looking for the truth, the time and patience and ability to discern that, when you've looked at so many different sources, is still oftentimes not there, because you simply don't know what's true?
And there's so many shiny objects out there where the truth might not be as sexy or, you know, I can create a video that a lot of people are going to watch.
Doesn't mean it's true.
And it will be a lot less interesting than the thing that actually happened.
You know, Jam, right now there are, as I said, lots of people out there who are trying to get informed.
And there are so many polls and studies out there that talk about trust in the media, and oftentimes what's perceived as a lack of trust or low trust in the media is due to perceptions of disinformation and misinformation.
But there is a bright spot.
People tend to trust their local media.
You are a local journalist.
You've been a local journalist for a long time.
Talk to us a little bit about why people tend to trust the local media.
Sure.
I think, you know, back in the day, there were basically four sources of news, ABC, NBC, CBS and the paper.
And we've been around for a long time.
We have the trust that we've earned over the years, in my station's case, 75 years.
And we have staff and anchors.
One of my main anchors has been with us for more than 30 years.
People know her, people trust her.
And she's also the person that you might bump into at the grocery store, unlike the people who are the anchors of, you know, the CBS Nightly News or on CNN or Fox News.
And I think, you know, we've been entrusted with that legacy.
And I think that has a lot to do with why people continue to trust us.
Right.
You know, Mike I want to go back to you because reporters gather information.
They look at issues from multiple angles and they write stories. But as you alluded to earlier, that's not the only source of news and information.
Artificial intelligence also gathers information and puts it together in ways that look like journalism.
Talk to us about that and whether that's something people should be concerned about.
Well, I think one of the things that I've experienced is, when I look at my phone and my news, more and more, things are just being thrown at me that, again, look like news, that I know are picked up from something I Googled a month, two months or three months ago, and it's either A.I.-generated, or the algorithm, which is, you know, a kind of intelligence, thinks it's what I'm going to want as news.
And it's not. What I'm finding is I want a gatekeeper, a human gatekeeper.
And I'm not so sure everyone wants that.
I hope we will all get to that, that we want someone that we can trust.
Because, to your point, we might trust the local anchor.
I think enough people are going to start asking, why are all these stories popping up in my feed?
And I hope that they come to the same conclusion, that they're being fed, to an extent.
I mentioned to you before, you know, I was kind of frightened.
I was in Chicago recently, and I was riding the train, and I just watched people on their phones, the way they consumed, and they weren't being discerning, right?
They weren't all reading The Washington Post or The New York Times or even the Chicago Tribune.
They were just laughing at one video from Instagram or TikTok at a time.
And this wasn't kids.
These were adults.
So that's what I fear, is that the, you know, the disinformation actor could just absolutely play upon them and create just completely false impressions of the world we live in.
Yeah.
Let me just ask both of you here. You're both in the journalism field.
For all those people out there right now who are saying, so what do I do? You know, in a few words, what's your advice to people who really do just truly want to be informed and be able to make good decisions based on legitimate information?
Mike, I'll start with you.
One thing I do is I'm trying to limit those feeds.
I'm trying to knock out as best I can all those things that I think are wrong.
I mean, on your phone, you can just click, I don't want to see anything from homes-and-gardens-dot-whatever, because one time you decided to look for a bath rack at some point.
I think you've got to be a little discerning in that regard.
And then when you're when you're just having fun on TikTok or Instagram, wherever, you just have to bring a little more discernment to what you're getting.
Is this a bot, and is it confirming my bias? Or, wait a minute, maybe I should kick the tires.
But there are a lot of things you can do.
But whether people will do it, that's... I don't know.
Thank you. Jam, unfortunately, we're out of time.
But listen, I want to thank both of you for being here.
This was a great discussion.
There's so much more to talk about, but you've given us a lot to think about.
Thank you very much.
Thank you.
All right.
In a moment, we'll turn to the future to discuss what's next for A.I. and the impact it has on the spread of disinformation.
But first, WKAR's politics and civics reporter Arjun Thakkar took to the MSU campus to hear what students have to say about artificial intelligence.
I'm Arjun Thakkar, I'm WKAR's politics and civics reporter, and we're here on the campus of Michigan State University speaking with student about artificial intelligence, disinformation and their thoughts on how A.I.
could impact the 2024 election and future elections.
And here's what they said.
I haven't really used it before.
I don't know too much about it.
I have experience with it.
You know Chat GPT.
I used Chat GPT.
Chat GPT.
Chat GPT.
I've used Chat GPT before.
I use it mainly for translation, since my, like, mother tongue is not American, is not English.
Only for classes, and not, like, cheating.
I try not to dabble too much in it because there's a line that you can cross, and I don't want to cross that.
There's been an influx of regulations that, like, you shouldn't be using Chat GPT or other forms of A.I.
to write your papers.
They want you to try and have your own original thoughts.
With that, there's a big discussion with disinformation.
Have you encountered disinformation online?
And I've definitely seen like the A.I.
pictures, but I'm fairly savvy when it comes to seeing like those things.
It's like when someone has six fingers.
There's just like fake pictures and stuff.
I see, like, photos that A.I.'s created, and obviously I can tell that it's fake.
It definitely can be a source of disinformation.
So it's a man-made object, you know, is going to have its faults, it's controlled by somebody, you know.
So it could be, you know, prone to bias.
So we need to be careful about, you know, the things that you use and where you get your information from.
A lot of what's circulated with A.I. is definitely put there by people who are trying to infiltrate an idea within the country, whether that comes from a different country or the people within it.
I'm not sure.
I am from Ukraine, so we deal with a lot of misinformation from Russia, and Russia has a lot of influence and uses their misinformation, spreading, like, falsehoods about the war in Ukraine.
I think globally it became easier to create false messages and to spread disinformation and misinformation. For example, where, like, a human can make one text or one campaign, artificial intelligence can, like, make 100 posts or 100 things at a time.
So I think people should lean more into critical thinking.
What impact do you think disinformation and A.I.
might have as we'r heading into the 2024 election?
Yeah, I think you can definitely get people to vote a certain way towards the left for sure.
I definitely think that it can have some effect, but there will be effects on both of the big sides.
So I don't think that trying to pull people one way or another will have that great of an impact.
There's a lot of, you know, people who are really, you know, left wing and, you know, people who are really right wing, you know, with their ways of thinking.
And, you know, you just got to be set in your own ways and just be mindful of the things that you watch, have discernment.
Well, it's certainly not going to help trust in the government.
I also feel like potentially, as A.I. becomes bigger and bigger and harder to tell what's real or not, I feel like more people will become less trusting of anything on the internet, and they'll become more like, they'll have to see things for themselves before they really believe it.
Artificial intelligence can generate some opinions that are quite extreme, that make, like, the left person become even more left and the right person become more right.
And that's going to hinder, you know, the public discourse that calls for the free, fair exchange of opinions.
I think it will have really a big impact, and a bad one, on the elections, because many politicians who are not very trustworthy can engage people in bad choices, and artificial intelligence can empower that.
I hope the American people, as people who really built the strongest democracy in the world, will make the right choice in the elections.
We've heard a number of perspectives on what impact A.I. might have on this election season, but what comes next?
I'd like to welcome Tian An Wong, assistant professor of mathematics at the University of Michigan, Dearborn.
And welcome back, Dustin Carnahan and Mohammad Ghassemi, to talk to us about how A.I. is being implemented in communities and how it will continue to evolve in the years to come.
Welcome back. Tian An, I want to start with you.
You know, when we think about how A.I.
has been used in communities across the country, there are a lot of positive things that researchers and innovators are doing.
You've been working with something called ShotSpotter, just as an example of how A.I. has been applied.
Can you talk to us a little bit about what that is and what the implications are?
Sure.
So I've started to think about sort of police technology and sort of surveillance, broadly speaking.
So ShotSpotter is one of them.
It's what we call an acoustic gunshot detection system.
So if a gunshot goes off, there's, you know, some mics around the area, and they will sort of pick up a sound and sort of identify that sound as a gunshot, as opposed to, like, a firework or a car backfiring.
And that usually alerts the police, and then they will dispatch, you know, some police officers to the area.
Previously I've looked at other things like predictive policing, sorry, not acoustic ones, the sort of license plate readers and these sorts of things, and facial recognition technology.
And these are things that...
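To give a flavor of what an acoustic detection system like the one Tian An describes is doing, here is a minimal sketch: extract a few descriptors from an audio clip and classify the event. ShotSpotter's actual pipeline is proprietary; the synthetic clips, features and model below are illustrative assumptions only.

```python
# Toy acoustic event classifier in the spirit of the system described
# above (gunshot vs. firework). Everything here, features included,
# is an illustrative assumption; real deployments use arrays of mics,
# curated recordings and far richer models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(clip: np.ndarray, sr: int = 8000) -> np.ndarray:
    """Crude descriptors: peak level, how fast the sound dies away,
    and where the energy sits in the spectrum."""
    env = np.abs(clip)
    spectrum = np.abs(np.fft.rfft(clip))
    centroid = (spectrum * np.fft.rfftfreq(clip.size, 1 / sr)).sum() / spectrum.sum()
    decay = env[: clip.size // 4].mean() / (env[-clip.size // 4 :].mean() + 1e-9)
    return np.array([env.max(), decay, centroid])

rng = np.random.default_rng(0)

def synth(decay_rate: float, n: int = 8000) -> np.ndarray:
    """Stand-in impulse clips; real training data would be recordings."""
    return rng.normal(size=n) * np.exp(-decay_rate * np.arange(n) / n)

# Fast-decaying impulses play the role of gunshots, slower ones fireworks.
X = [features(synth(12.0)) for _ in range(40)] + [features(synth(3.0)) for _ in range(40)]
y = ["gunshot"] * 40 + ["firework"] * 40

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([features(synth(11.0))]))  # a fast decay: gunshot-like
```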
Yeah, yeah, you mentioned police surveillance, so let's expand that out a little bit.
You know, when people think about police surveillance and the use of artificial intelligence in that space, it's something that might make people a little nervous.
So can you talk about, you know, sort of how artificial intelligence is used and what kind of constraints should be on A.I. as it relates to things like police surveillance?
Yeah.
So there's actually a really nice framing.
There was a Wired article that described the parent company of ShotSpotter, which is SoundThinking, as trying to be the Google of police tech.
And I think that's a really nice way to think about it, because if you think of what Google started off as, it was just a search engine back in the day, and now everybody's doing everything, right?
You know, they've expanded into lots of things at the same time.
And so when we talk about A.I. and sort of police tech, and just other things too, you know, each one of these things is a single-use purpose.
But when we put them together, it's something that's sort of much more generalized.
And, you know, that's something that people are thinking about, and we don't quite know what that looks like yet.
So, yeah.
You know, I want to fast forward here just a little bit.
We've been talking a lot about artificial intelligence and its implications, but let's look at 5 to 10 years down the road here, maybe a little further.
How do you think our lives might be impacted or changed?
What is it that we're doing today that will be different in ten years, directly related to artificial intelligence?
Mohammad, let's start with you.
I think as it becomes easier to automate activities and get through kind of the technical pieces to accomplish what we're looking to do, there's going to be a greater emphasis on person-to-person interaction.
We heard allusions to this actually with earlier folks that you were speaking with.
If you're confronted with information from a variety of sources, because it's easy to fabricate, then you start to lean into either trusting your own biases, if you're getting that information digitally, or relying more on interpersonal interactions.
And I think that that's an opportunity in one sense, because the digital ecosystem, the profusion of knowledge it allows, makes us kind of rely back on interpersonal relationships as a way to get reliable information in the future.
But it also poses an important challenge because that's not scalable, right?
I mean, you can't scale having conversations with people all over the country, at least not in a face-to-face capacity.
So there's interesting challenges and opportunities that I think will exist that the technology is allowing.
Yeah.
Let's focus for a minute, Dustin, on some of those challenges.
When you thin about artificial intelligence, we clearly have establishe tonight that there are positive applications, but what are some of the things that concern you with regard to the prevalence and spread of artificial intelligence?
Oh, well, one thing that concerns me is what my colleague here just said, right?
And that is our inability to rely as much on indirect experience as we used to. And by indirect experience, I mean second-hand accounts of things that happen.
Right.
I totally love the idea that we'll be able to communicate more person to person and have these conversations to help validate what's real and what's not.
But at the same time, there's many instances in our lives where we don't have the opportunity, or perhaps even the will, to experience things directly.
We simply can't, right, to the point of it being scalable.
For example, we can't go to political rallies, and if we do, we tend to go to those that are already coming from our preferred candidate.
And so how do we grapple with that sort of situation?
How do we actually feel comfortable about what we're seeing and whether or not we can believe it? And kind of pushing in that same direction,
I think another challenge is how can we engage in media literacy training in an increasingly confusing media environment?
How can we make people recognize the concerns involving A.I. while simultaneously not making them doubtful of everything?
Because I think if we enter that stage, similar to what an earlier panelist said, if we enter that state, I think we're in a really precarious situation in terms of our ability to just know what we can trust, and if we don't trust anything, well, then how can we participate meaningfully?
And I think that's a real challenge moving forward.
Let me shift to a question that really gets to something I know a lot of people are confused about, and that's the issue of how A.I. works.
You know, there is a belief that, you know, if you start something, if you have artificial intelligence, the longer it gets to linger, the smarter it gets.
And there's fear that at some point it gets smarter than us and it takes us over.
So I want you all to talk a little bit about that fear, that concern, and sort of how artificial intelligence becomes more intelligent.
Right.
So, Mohammad, let's start with you.
And then we'll talk with you.
Sure.
Yeah.
The way that contemporary A.I. systems are trained is by using very large collections of data.
If you think about language models, think your ChatGPT, Gemini, Claude 3, or that flavor or family of models that we've all got increasing experience with, they look at data that comes from the internet, from textbooks, from journalists and so on, and they're trying to figure out what are the patterns that are embedded within there: when a given question is asked on Reddit, what does a typical response pattern look like?
That's a kind of a fuzzy way to understand how it is that the models are developed and trained.
The implications of that are important.
On the one hand, those A.I. systems are good at kind of thinking through and acting in domains where there's been a lot of discourse or there's been a lot of discussion, but they're not as capable in really frontier domains, right?
Because we don't have a lot of public-facing writing on those things.
They're also not really great at topics that require certain kinds of reasoning, inference, or trying to predict what's likely based on what they've seen in the past.
They're great, though: if you ask a language model to write an essay, for example, on COVID-19, there's been a lot written about COVID-19.
So it's going to nail it. It's going to do a great job.
But if you start asking it to write about a very obscure topic on the edge of physics that we're still doing work on, fusion, for example, and ask it to innovate or solve the fusion problem,
it's not going to be able to do that, because basically the A.I. systems we have today are almost like a mirror, in an interesting way, of who we are and the collective knowledge that we've gotten up until this point.
And it allows us to sort of query that collective knowledge.
Right.
And hear renditions of that from all the voices that have contributed to that collective knowledge base.
So to answer your question in short form, the way we train these systems to demonstrate that intelligence is through data that we collectively have all participated in creating.
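For a hands-on intuition of the pattern-learning idea Dr. Ghassemi sketches, here is a toy stand-in: a bigram model that counts which word follows which in a small corpus and generates from those counts. Real language models use neural networks over vastly more data; the three-sentence corpus below is purely illustrative.

```python
# Toy "language model": count next-word patterns in a corpus, then
# generate from them. It mirrors its training data exactly as the
# panel describes; ask it about anything outside the corpus and it
# simply has no pattern to draw on.
from collections import Counter, defaultdict
import random

corpus = (
    "the model learns patterns from data . "
    "the model predicts the next word from patterns it has seen . "
    "the model mirrors the data it was trained on ."
).split()

# Build a bigram table: for each word, how often each word follows it.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def generate(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

random.seed(1)
print(generate("the"))  # fluent on covered phrases, silent beyond them
```

On this toy scale, the output is fluent only about the handful of phrases the corpus contains, which is exactly the "mirror" point above: the model reflects its training data and has nothing to say about what isn't in it.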
Yeah, we can talk more about this in Q&A.
It's an extremely interesting topic, unfortunately for this session.
We're out of time.
I want to thank you, Mohammad, Dustin, Tian An. Thank you so much for being here tonight.
We've only scratched the surface of how A.I. is shaping the future of information and democracy.
As the 2024 elections approach, it's more important than ever to stay informed, ask questions and seek the truth. Continue the conversation and join our live Q&A right now on the WKAR News YouTube channel, and continue to follow along with WKAR through the election season.
For information you can trust.
From all of us here in the WKAR studios, Thank you for watching.
WKAR is supported by the MSU Research Foundation, bringing new innovations to the marketplace for global impact.
Learn more at MSUFoundation.org.