Romney Leads Senate Hearing on Addressing Potential Threats Posed by AI, Quantum Computing, and Other Emerging Technology

WASHINGTON—U.S. Senator Mitt Romney (R-UT), Ranking Member of the Senate Committee on Homeland Security & Governmental Affairs (HSGAC) Emerging Threats and Spending Oversight Subcommittee, today led a bipartisan hearing with Senator Maggie Hassan (D-NH), Subcommittee Chair, to examine how advanced technologies—such as artificial intelligence, quantum computing, and bioengineering—may pose risks to national security. During his remarks and line of questioning, Senator Romney stressed his concern regarding the government’s ability to protect the nation from attacks orchestrated with AI and other advanced technologies.

A transcript of Romney’s opening statement and questioning can be found below.

Opening Statement:

Thank you, Madam Chair. And I appreciate your willingness to hold this hearing. And I particularly appreciate the chance to speak with these three individuals. As you know, we’ve been receiving a lot of briefings from various luminaries in the technology community on matters related to AI, but I’m afraid they’re not as closely involved in the nitty-gritty of what’s happening in the AI world as each of the three of you are, and therefore, I particularly look forward to your testimony today and to our chance to ask you some questions.

I’m more in the camp of those terrified about AI than in the camp of those who think this is going to make everything better for the world, even though I know from the analysis that’s been done so far that wonderful advances would surely come as a result of AI. I just saw a study, you may have seen it, from the Boston Consulting Group, where they put two different groups of consultants on various tasks. One had access to AI. The other did not. The one that had access to AI ended up producing a superior product in most cases, and it’s like, okay, that’ll make us more productive in providing advice and counsel and doing all sorts of other procedures in the business world. Sure, government can be made more effective. I’m sure research in a whole host of areas, including medical research, will be more effective.

So, there are wonderful benefits. But at the same time, there are enormous risks to humanity at large, to our national security domestically, to jobs in the U.S., to a whole host of things. And I must admit, the frightening side has the edge, at least in my own thinking. The discussions that I’ve heard so far about AI look at ways for us to potentially prevent some of the most severe downsides. One is that individuals point out, correctly, that we need to coordinate with other nations and perhaps have some kind of an international consortium or international agreement that relates to AI. I don’t know how that would work, where it would be housed, how we would initiate that, and whether that’s realistic. There’s also been discussion that we need to have a separate agency or department of the federal government with individuals who focus on AI, look at the companies developing it, develop strategies, and give advice and counsel to people like the Chair and myself.

Frankly, a lot of 76-year-olds, in my case, are not going to figure out how to regulate AI, because we can barely use our smartphones. So, that’s another area, which is—should we have that kind of an agency or that kind of a department? There’s also been a discussion that before a new AI generation is released to the public or put on open source, it ought to go through some trial period, with individual experts testing it and seeing if it can be abused and how it could be abused, and perhaps limiting its public launch until it’s actually had those potential flaws corrected.

And finally, there’s the question of how we can keep the world’s worst actors from having access to a technology that they could use to threaten us, or threaten humanity, for that matter. And some have suggested that because of the computing power necessary for AI systems to work, we could manage the flow of, and the presence of, if you will, high-powered semiconductor chips—see where they are, see who’s making them, see where they go, restrict where they go. I don’t know whether that’s a realistic option for management of this or not. But those are, I think, the questions for someone like myself who’s more concerned about the downside than the upside. My question is, recognizing that this is going to be all over the world, what can we do to try and prevent as much of the downside as possible? So with that, Madam Chair, I look forward to your questions, and I may have one or two myself.

Questioning:

If the objective of this hearing was to calm our nerves and give us more confidence that everything is fine, it has not done that. It has underscored the fright that exists in my soul that this is a very dangerous development. And I realize it’s not like overnight we clicked on a switch and now we have AI and machine learning when before we didn’t. We’ve had machine learning, but it’s now reached a level with generative AI that’s, in many respects, quite different than what we’ve known in the past. And each of you has suggested some of the ways we might be able to safeguard against the worst kind of outcomes in the respective areas that have been described. The challenge that comes to mind is, one, as I listen to your recommendations, I understand about half—maybe that is an overstatement. You describe the various stages—we need to put safeguards here, safeguards there. I’m not sure I understand what the stages are. I don’t know what’s involved in them, and the likelihood that senators are going to be able to figure that out and draft a bill that focuses on this area strikes me as being just not reasonable.

It’s just not going to happen. Not in the House. Not in the Senate. And so, I look for your counsel, or your thinking, on how we get from where we are—which is no safeguards at all—to the safeguards you would recommend, or others. And I can tell you that were I the chief executive officer of the country, or the chief executive officer of a corporation—let’s say I were the CEO of a major corporation, the head of a bank—there would be two or three areas I’m really concerned about: quantum computing being able to break into our algorithms, excuse me, into our systems, to determine how I move money around and so forth. What I would do is first decide who I wanted to put in charge. So, there’s going to be someone in charge of our effort to combat these threats, all of the threats. It might be an agency, it might be a department, but I’m going to put someone in charge.

Then I’m going to say to them: you’re going to need to hire the expertise for each one of those threats or opportunities—hire someone to oversee each of those areas, and then they may need to bring in an outside person who has expertise there, or multiple outside people, or perhaps hire their own staff. But one way or the other, we’re going to have to take this apart piece by piece and solve it piece by piece. Am I wrong in that assessment? And if I’m right, where should this be? Who should be responsible? I mean, you know, Dr. Murdick was in Homeland Security. I mean, should we task Homeland Security with this? They’ve got so much on their plate right now. It’s like, oh, gosh, here’s one more thing, Secretary Mayorkas, that, you know, we can criticize you for. So, do we set up a new agency? A new department? And I don’t know if you know where this all resides right now, but what’s the process? How do we get from where we are to actually putting in place these safeguards? That’s the question. And then how much time do we have to do it?

Full video of Senator Romney’s line of questioning with the witnesses can be found here.