Artificial Intelligence: Technology of promise or concern?

Using AI to fight crime and stop gun violence
A supercomputer that helps researchers using Artificial Intelligence.

TAMPA, Fla. — News reports about Artificial Intelligence are seemingly endless, and the headlines that often accompany them are sometimes sensationalized and scary.

The question many of the world's most intelligent people are asking: Is AI safe?

For this Full Circle report, we looked broadly at AI and how it is used in two areas of Tampa Bay to fight crime and stop gun violence.

AI has infiltrated nearly all aspects of human life. Here is a previous report on AI initiatives at the University of Florida.

ZEROEYES

In May, a few days before summer vacation, ABC Action News reporter Michael Paluska visited Challenger K-8 School of Science and Mathematics in the Hernando County School District. Walking through the front entrance felt like walking through the entrance of any other school. Surveillance cameras were outside the building and peppered across ceilings throughout the school.

But these cameras have a secret weapon to stop a school shooter: Artificial Intelligence.

"I wanted a system that would prevent them from being able to get in with that weapon, right? If we can respond before they ever cross the threshold of the building, we're in much better shape," Jill Renihan, Director of Safe Schools in the Hernando County School District, said.

Renihan's position was created after 17 students and educators were massacred on Feb. 14, 2018, in Parkland at Marjory Stoneman Douglas High School.

"And that's it, it's one more tool," Renihan said. "I want to make sure that I'm doing absolutely everything I can to make sure that our kids are as safe as they can possibly be."

ZeroEyes powers the AI. Paluska Zoomed with COO and co-founder Rob Huberty. Huberty said the shooting in Parkland was a key motivator to turn passive surveillance cameras into an active deterrent.

"It just turns out that nobody is ever looking at cameras—there are thousands, hundreds of millions of cameras that nobody watches them. I said, 'Shouldn't AI be able to determine that?'"

According to Huberty, AI can do that, and advancements in camera technology and higher resolutions make it possible. The goal is to stop a potential threat before they ever enter a building.

"We started doing research about, you know, different school shootings and mass shootings. It turns out that a lot of the shooters show their guns really, really early," Huberty said. "Then oftentimes, those shooters stand in front of a camera deciding whether or not they're going to do something for minutes. And so, particularly, someone on site has an opportunity to prevent the whole thing. Even if you don't prevent that one thing from happening, you can drastically reduce the response time because you know exactly where you're going."

When asked how the technology works, Huberty said the best way to describe it is if "a human eye can determine something as a gun in a video, then our AI is going to alert us."

"So it starts with this way we process every image frame by frame, we're looking at pixels, or we're looking at shapes and colors. For an object, it's AI object detection, but really, it goes frame by frame, is there a gun in this image? So if there's something that the AI thinks there's a gun, it will send out an alert," he added.

If the AI hits on a gun, an image goes to a human, who verifies it and, if confirmed, alerts authorities.

"We get a photo of the individual and the name of the camera as well as the GPS coordinates," Renihan said. "So, the response time is pinpoint accurate and can begin within those three to five seconds."

The cameras will also track the threat throughout the school, taking the guesswork out of where officers need to go and saving valuable seconds that can save lives.
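The workflow Huberty describes—frame-by-frame detection, a confidence threshold, human verification, then an alert carrying the camera's name and GPS coordinates—can be sketched roughly in Python. The detector function, score field, and threshold below are illustrative assumptions, not ZeroEyes' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    camera_name: str
    gps: tuple          # (latitude, longitude) of the camera
    frame_index: int
    confidence: float

# Hypothetical stand-in for the detection model: returns the model's
# confidence that a gun appears in the frame. A real system would run
# a trained object-detection network here.
def gun_confidence(frame) -> float:
    return frame.get("gun_score", 0.0)

def scan_camera_feed(frames, camera_name, gps, threshold=0.8):
    """Process a feed frame by frame; flag frames for human review."""
    flagged = []
    for i, frame in enumerate(frames):
        score = gun_confidence(frame)
        if score >= threshold:
            # The AI only flags candidates; a human verifies each one
            # before authorities are notified.
            flagged.append(Alert(camera_name, gps, i, score))
    return flagged
```

The key design point from the interview is that the AI never notifies police directly: it surfaces candidate frames, and a person makes the call within seconds.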

AI FACIAL RECOGNITION FOR LAW ENFORCEMENT

Detectives use AI to fight crime at the Pinellas County Sheriff's Office.

"It's robust, and it works," Sheriff Bob Gualtieri told Paluska. "One of the things that are extremely important about facial recognition and for everybody to understand, in the AI realm and with facial recognition, is it still requires good old-fashioned boots-on-ground police work. You still got to do the legwork to corroborate."

The system uses mugshot booking photos and driver's license pictures to try to make a positive match. The software rates candidate images by how closely they match, assigning accuracy percentages. Other agencies can also use the power of the FACES program, which stands for Face Analysis Comparison and Examination System.
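The ranking step the sheriff's office describes—comparing one probe image against a gallery of booking and license photos and scoring each candidate—can be sketched as follows. The embedding vectors and cosine-similarity scoring here are common stand-ins for illustration; they are assumptions, not details of the FACES software itself.

```python
import math

# In a real system, a neural network maps each photo to a numeric
# feature vector (an "embedding"); similar faces get similar vectors.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_candidates(probe_vec, gallery):
    """Rank gallery photos (mugshots, licenses) against a probe image.

    gallery: dict mapping a record ID to its embedding vector.
    Returns (record_id, match percentage) pairs, best match first.
    These are investigative leads to corroborate, not positive IDs.
    """
    scored = [(rid, round(100 * cosine_similarity(probe_vec, vec), 1))
              for rid, vec in gallery.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)
```

As Gualtieri stresses, the percentages only rank leads; detectives still have to do the legwork to corroborate any candidate the system surfaces.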

Gualtieri said the agency uses the program to react to crimes after the fact, not in real time to scan for potential criminals.

"There's two very different things, and I don't support, and we don't do it here," Gualtieri said. "The random collection of people's faces on the street; that's too Big Brother. That's too out there."

Other than helping solve crimes, the sheriff said it's helped identify John and Jane Does.

"We got somebody that's been run over by a car, and they're sitting in the emergency room, and they're incapacitated, they can't talk, they don't have any identification. They're out for a run and get hit by a car. So we use it to identify them as well," Gualtieri said.

IS AI 'OVERHYPED?'

In May, OpenAI co-founder and CEO Sam Altman warned federal lawmakers that AI could cause "significant harm to the world" if the technology goes wrong.

"I think if this technology goes wrong, it can go quite wrong," Altman, whose company developed the widely used AI-driven conversation program ChatGPT, told a Senate committee.

"My worst fears are that we caused significant—the field, the technology, the industry—caused significant harm to the world," Altman testified.

Concerns over AI are growing. Both Bill Gates and Elon Musk are sounding the alarm, and headlines from legitimate news outlets read as sensational and apocalyptic.

"It's interesting to hear all the people talking about it," Larry Hall, Distinguished Professor in the Department of Computer Science and Engineering at USF, said.

"I think that the projections of doomsday are way, way too far out. Both in the sense of very real and not conceivably close at this point in time," he added.

Hall is also the co-director of the Institute for Artificial Intelligence + X at USF.

"One thing that I get asked about sometimes by medical students is, will there be a job for me? And I think the answer is absolutely, you may be aided by AI, but you're not, you won't be replaced, you still need human judgment," Hall said.

When asked whether AI helping with medical decisions is dangerous, Hall said the answer is somewhere in the middle.

"It's only dangerous if you don't put a human in the loop. And then only potentially because the AI is not always right and can learn what are called shortcuts, which are things that a human wouldn't usually use," he said.

Hall said his main concern is bad actors using AI to create more fake news.

"You have the ability to produce fake videos. So we can, you know, replace you with somebody saying really bad things. It looks just like you. And those are potentially detectable. So it's a bit of an arms race on both sides," Hall said.

"What would worry me is the potential and the activity that's going on with pushing false information. We have had tremendous campaigns without AI or limited AI inundating us with fake news. And that is a societal problem," he said. "I think that's the most immediate worry. Other words are, you know, autonomous? If there were autonomous weapons using AI, that would be an issue."

Recently, there's been talk about Artificial General Intelligence (AGI), a theoretical artificial intelligence with human-like cognitive abilities. Hall does not think the technology has developed to the point that it can and will start to think on its own like humans.

"A little overhyped. That's what I said, a little overhyped. Artificial intelligence has seen, over time, very big expectations that aren't met. We have very big expectations, and then they aren't quite met. I haven't seen somebody show me something that indicates there really is imminent danger. And so, we should keep going. If somebody shows me that there's likelihood of imminent danger, then I would want to slow down."