A new report indicates that information systems powered by artificial intelligence (AI) will become more critical to state government operations, especially where transportation is concerned. Yet separate research by a Princeton University team indicates that the “learning ability” of AI systems leaves them “vulnerable to hackers in unexpected ways” and thus will require new security protocols.
[Above photo by the Utah DOT.]
That’s an issue of concern for state governments as they are predicted to rely more on AI in the future, according to a 24-page report compiled by the National Association of State Chief Information Officers, the Center for Digital Government, and IBM.

“The ability to predict and potentially prevent traffic accidents, pinpoint failing infrastructure assets, identify individuals who are at risk for opioid use disorder, and rapidly analyze video surveillance to detect criminal activity are all just a few powerful motivators for adoption” of AI-based systems, the report said. “Still, budgets, skills gaps, and legacy infrastructure present challenges, while questions around privacy and ethics are emerging.”
Although the survey shows that states are still “nascent” in their implementations of AI and machine learning, 55 percent of the state CIOs polled for the report said they are “actively pursuing AI” by staging proofs of concept or evaluating requirements and issuing requests for information, while another 32 percent of states have progressed to running AI in some production operations or staging pilot projects.

In addition, 22 percent of survey respondents said AI can help gather and deliver information. An early example of AI-powered information gathering is the sensor and video data being collected on highways and city streets to help officials implement “smart city” approaches to traffic management and maintenance schedules.
David Fletcher, chief technology officer for the state of Utah, noted in the report that his state is using that approach via a pilot project whereby machine learning is applied to video feeds from cameras mounted along state highways.
“The goal is to use machine learning to detect accidents and then automatically dispatch responders to the locations as soon as the accidents occur,” he said.
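In rough outline, such a pipeline repeatedly polls each camera, scores the latest frame with a trained model, and raises a dispatch alert when the score clears a confidence threshold. The sketch below illustrates only that loop; every name in it (AccidentClassifier, read_frame, dispatch_responders) is a hypothetical stand-in, since the report does not describe Utah’s actual implementation.

```python
# Illustrative accident-detection loop over highway camera feeds.
# All components here are hypothetical stand-ins; the report does not
# detail Utah DOT's actual models, camera APIs, or dispatch systems.
import random
import time

class AccidentClassifier:
    """Stand-in for a trained video model (e.g., a frame-level CNN)."""
    def predict(self, frame) -> float:
        # A real system would run inference on the image tensor here;
        # a random score keeps this sketch self-contained and runnable.
        return random.random()

def read_frame(camera_id: str):
    """Stand-in for grabbing the latest frame from a roadside camera."""
    return None  # placeholder for an image array

def dispatch_responders(camera_id: str, confidence: float) -> None:
    # In production this would notify a traffic operations center.
    print(f"ALERT {camera_id}: possible accident (p={confidence:.2f})")

def monitor(camera_ids, threshold=0.95, cycles=10, interval_s=1.0):
    model = AccidentClassifier()
    for _ in range(cycles):              # bounded loop for the sketch
        for cam in camera_ids:
            score = model.predict(read_frame(cam))
            if score >= threshold:       # alert only on high confidence
                dispatch_responders(cam, score)
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor(["I-15-MP-305", "I-80-MP-121"])
```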
Yet Prateek Mittal, the lead researcher and an associate professor in Princeton’s Department of Electrical Engineering, noted that AI and machine learning systems that learn from sensor and camera data can be compromised by “adversarial tactics” that, for instance, could trick a traffic-efficiency system into causing gridlock.
“If machine learning is the software of the future, we’re at a very basic starting point for securing it,” he explained in a recent paper. “For machine learning technologies to achieve their full potential, we have to understand how machine learning works in the presence of adversaries. That’s where we have a grand challenge.”
Mittal, whose work is supported by the National Science Foundation, Intel Corporation, and the Office of Naval Research, said one such attack involves a malevolent agent inserting bogus information into the stream of data that an AI system is using to learn — an approach known as “data poisoning.”
For example, an “adversary” can inject false information into crowdsourced data streams gathered from mobile phones reporting on traffic conditions. That crowdsourced data can be used to train an AI system to develop models for better collective routing of autonomous cars, cutting down on congestion and wasted fuel – yet the false information would compromise that model.
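To make the mechanics concrete, here is a minimal, illustrative sketch of that scenario. The numbers are made up, and a deliberately simple mean-speed “model” stands in for a real routing system; none of this reflects any specific deployed system.

```python
# Minimal sketch of "data poisoning" against crowdsourced traffic data:
# an attacker injects fake slow-speed reports so a free-flowing road
# segment looks congested to the model that learns from the stream.
# All figures below are illustrative assumptions.
import statistics

honest_reports = [62, 65, 63, 61, 64, 66, 60, 63]  # mph from real phones
poisoned_reports = [5, 4, 6, 5]                     # fabricated "gridlock" reports

clean_estimate = statistics.mean(honest_reports)
poisoned_estimate = statistics.mean(honest_reports + poisoned_reports)

print(f"learned speed, clean data:    {clean_estimate:.1f} mph")    # ~63.0
print(f"learned speed, poisoned data: {poisoned_estimate:.1f} mph") # ~43.7

# A router trained on the poisoned stream would steer cars away from a
# clear road -- the congestion-causing attack Mittal describes. One
# common mitigation is to discount outliers, e.g., by using the median:
robust = statistics.median(honest_reports + poisoned_reports)
print(f"median (more robust):         {robust:.1f} mph")            # ~62.5
```

Here a handful of fabricated reports, roughly a third of the sample, drags the learned speed estimate down by about 20 mph, which is why Mittal argues that anything learned from corrupt data is suspect.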
“Anything you learn from corrupt data is going to be suspect,” Mittal warned. “So far, most machine learning development has occurred in benign, closed environments — a radically different setting than out in the real world. The kinds of adversaries we need to consider in adversarial AI research range from individual hackers trying to extort people or companies for money, to corporations trying to gain business advantages, to nation-state level adversaries seeking strategic advantages.”
