Transportation Department Plans To Write New Regulations With AI, Claims They'll Be 'Good Enough'
When it isn't busy helping creeps undress children, so-called artificial intelligence software can also make error-ridden commercials for Coke, generate fake YouTube videos about vehicles that don't exist, and even screw up incredibly basic math. And if you just thought to yourself, "Wait, all of that sounds bad," you would be correct. Still, a bunch of politicians and C-suite executives are obsessed with it, so they keep pushing AI on us whether we like it or not. In fact, ProPublica reports the Republican-led Department of Transportation currently plans to start using AI to write transportation regulations.
Back in December, DOT lawyer Daniel Cohen reportedly told employees that AI had the "potential to revolutionize the way we draft rulemakings" and promised a demonstration that would show off "exciting new AI tools available to DOT rule writers to help us do our job better and faster." Discussions about using AI to write new transportation regulations reportedly continued after that demonstration, as recently as last week. Apparently, Gregory Zerzan, the DOT's general counsel, wants the agency to be the "point of the spear" when it comes to federal use of AI and "the first agency that is fully enabled to use AI to draft rules."
You'd think we'd want the rules that planes, trains, and automobiles are expected to follow to be written by real-life humans who actually know things, especially since AI's track record in the legal arena is riddled with costly errors, but that reportedly doesn't worry Zerzan. "We don't need the perfect rule on XYZ. We don't even need a very good rule on XYZ," he reportedly said in one meeting, adding, "We want good enough. We're flooding the zone."
Nothing to see here, folks. Just a bunch of "good enough" regulations written by Fancy Autocorrect, meant to govern air travel, crash safety, and who knows what else.
Not everyone's on board
As you can probably imagine, not everyone at the DOT has been fully on board with this plan. As ProPublica put it:
These developments have alarmed some at DOT. The agency's rules touch virtually every facet of transportation safety, including regulations that keep airplanes in the sky, prevent gas pipelines from exploding and stop freight trains carrying toxic chemicals from skidding off the rails. Why, some staffers wondered, would the federal government outsource the writing of such critical standards to a nascent technology notorious for making mistakes?
The answer from the plan's boosters is simple: speed. Writing and revising complex federal regulations can take months, sometimes years. But, with DOT's version of Google Gemini, employees could generate a proposed rule in a matter of minutes or even seconds, two DOT staffers who attended the December demonstration remembered the presenter saying. In any case, most of what goes into the preambles of DOT regulatory documents is just "word salad," one staffer recalled the presenter saying. Google Gemini can do word salad.
In case that didn't have you worried enough already, Zerzan also reportedly claimed that "it shouldn't take you more than 20 minutes to get a draft rule out of Gemini." And, as we all know, when it comes to transportation regulations, quantity is far more important than quality. Why let concerns about potential issues with one little regulation get in the way of writing as many of them as possible as fast as possible?
Everything's going well so far
If Justin Ubert, the Federal Transit Administration's current head of cybersecurity and operations, is to be believed, human employees are a "choke point" that just gets in the way of AI doing its thing, and, as part of his push to build a federal "AI culture," they'll soon be relegated to overseeing "AI-to-AI interactions." Another presenter reportedly told those in attendance that Google's Gemini software can already handle as much as 90% of the work that goes into regulation-writing:
To illustrate this, the presenter asked for a suggestion from the audience of a topic on which DOT may have to write a Notice of Proposed Rulemaking, a public filing that lays out an agency's plans to introduce a new regulation or change an existing one. He then plugged the topic keywords into Gemini, which produced a document resembling a Notice of Proposed Rulemaking. It appeared, however, to be missing the actual text that goes into the Code of Federal Regulations, one staffer recalled.
The presenter expressed little concern that the regulatory documents produced by AI could contain so-called hallucinations — erroneous text that is frequently generated by large language models such as Gemini — according to three people present.
Sure, the text may have been missing from the AI-generated draft, but at least it looked official. And it's not like the text really matters all that much when it comes to regulations. They're more about general vibes, anyway, and you can just have humans fix any mistakes (if they're still employed and notice them in time). "It seemed like his vision of the future of rulemaking at DOT is that our jobs would be to proofread this machine product," one employee told ProPublica. "He was very excited."
Skeptics push back
For some reason, that demonstration didn't manage to change the hearts and minds of the DOT employees who say it's probably a bad idea to let hallucination-prone LLMs write federal regulations:
The December presentation left some DOT staffers deeply skeptical. Rulemaking is intricate work, they said, requiring expertise in the subject at hand as well as in existing statutes, regulations and case law. Mistakes or oversights in DOT regulations could lead to lawsuits or even injuries and deaths in the transportation system. Some rule writers have decades of experience. But all that seemed to go ignored by the presenter, attendees said. "It seems wildly irresponsible," said one, who, like the others, requested anonymity because they were not authorized to speak publicly about the matter.
And, you know, when you put it that way, it does sound bad. It's also a step too far for Mike Horton, DOT's former acting chief artificial intelligence officer, who left his position back in August. When Horton spoke with ProPublica, he said the plan was like "having a high school intern that's doing your rulemaking" and also said those in charge "want to go fast and break things, but going fast and breaking things means people are going to get hurt." And yeah, some of us may die, but as Republicans have shown time and time again, that's a sacrifice they're willing to make.
There's also much more in the original article than would be fair to include here, so head on over to ProPublica and give the rest of it a read.