The Artificial Intelligence Revolution Has Begun: How Do We Regulate It?

Jacob Chernow, Mar 21, 2023

By now, you’ve probably heard of ChatGPT. On the off chance that you haven’t, ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI that endeavors to replicate a human conversationalist. Interestingly, this sophisticated AI “bot” is not limited to conversation; it can do almost anything. To name a few examples, ChatGPT has debugged video game code, passed an MBA exam given by a Wharton professor [1] and the Stanford Medical School clinical reasoning final [2], and created a credible diet and workout program. These monumental feats represent just the beginning of the AI revolution.

On January 25, 2023, Rep. Jake Auchincloss (D-MA) became the first member of Congress to deliver a speech to the House of Representatives written by an AI program, marking a new era for technology and humanity. Rep. Auchincloss’s speech proposed establishing a joint United States-Israel center in the United States to serve as a hub for AI research and development in the public and private sectors [3].

One day later, Rep. Ted Lieu (D-CA) presented the House with a congressional resolution outlining Congress’s responsibility to ensure the safe development and deployment of AI [4]. One thing the resolution purposefully leaves out: ChatGPT wrote it entirely. Shocking, right?

Integrating AI into our day-to-day lives has the power to greatly benefit society, but it also has the ability to do immense societal harm. In an extreme scenario, AI robots could literally wipe out all of humanity. In order to ensure the responsible development and deployment of AI and protect the rights of all individuals, Congress must take a proactive approach to create comprehensive legislation that addresses the unique ethical, social, and legal challenges AI poses. Forward-thinking members of Congress such as Reps. Lieu and Auchincloss must collaborate with agencies including the National Institute of Standards and Technology (NIST) in order to make this legislation a reality.


So, what are these societal harms posed by AI? 

Rep. Jay Obernolte (R-CA), who graduated from the University of California, Los Angeles, with a Master of Science in AI, stressed that “artificial intelligence is not evil robots with red laser eyes, à la the Terminator” [5]. Obernolte is correct: we should not be worried about hypothetical 6'2", 245-pound cybernetic beings murdering civilians, but should instead focus on AI’s detrimental ability to falsify and forge information, which creates critical threats to national security and society. For instance, AI can produce deepfakes with surgical precision. These deepfakes can be applied to nearly any field: crafting entirely fake news stories and videos, dodging facial recognition surveillance, and influencing U.S. elections. A deepfake could fabricate a realistic video of a popular politician discussing a controversial issue, tanking the candidate’s chances at election. In this sense, deepfakes threaten the very fabric of our democracy. Beyond deepfakes, malicious uses of AI could include crafting and orchestrating crippling cyberattacks, compromising digital privacy measures, and creating undocumented autonomous weapons [5]. In the wrong hands, AI clearly poses a tremendous threat to national security and to human society as a whole.

From an economic perspective, AI has the potential to eradicate the need for humans in parts of the workforce, accelerate income inequality, and promote bias and discrimination in employment. AI has already begun to automate repetitive and routine jobs in certain industries. The World Economic Forum (WEF) recently estimated that automation will displace upwards of 85 million jobs worldwide by 2025. The WEF also cited a memo by PwC, the second-largest professional services network in the world, explaining that increased digitization during the COVID-19 pandemic will accelerate this AI job takeover. Moreover, PwC found that 44% of workers with low levels of educational attainment will be at risk of being made redundant by automation by the mid-2030s [6]. Clearly, the potential employment loss is steep. Furthermore, AI will exacerbate income inequality as well-paying jobs become automated, forcing employees into lower-paying work. Ironically, even software engineers could be made redundant by AI technologies; engineers who previously enjoyed a high standard of living would be forced to transition to lower-skill jobs. If not properly regulated, AI has the potential to upend our economic system, uprooting millions of lives in the process.


Policy Recommendations 

As Rep. Lieu has said, we must use regulation to foster and harness AI, improving the quality of life of millions of human beings. Otherwise, we face a dystopian future controlled by unchecked AI. Congress must innovate to reach this goal.

Rep. Auchincloss has proposed a bill directing the U.S. Secretary of State to create a joint U.S.-Israel AI Center in the United States after consulting with the U.S. Secretary of Commerce and the leaders of other pertinent U.S. agencies [7]. The bill, which was co-authored by Senators Rubio, Cantwell, Blackburn, and Rosen, emphasizes the Center’s capacity to act as a focal point for AI research and development across the public, business, and educational sectors in both countries. These leaders are steadfast in their conviction that establishing a U.S.-Israel AI Research and Development Center will promote bilateral AI collaboration and advance this crucial field.

Rep. Lieu is a pioneer in crafting recommendations for regulating AI. His fully AI-written resolution calls for ensuring safety in the development and deployment of AI technologies, and his recent appointment to the House Committee on Science, Space, and Technology will serve as one of his gateways to addressing this complex and rapidly evolving field. Rep. Lieu’s proposed approach has two main objectives. First, to ensure that AI is regulated responsibly and ethically, he calls for the attention and participation of qualified members of Congress. Given how new this issue is, it is crucial that he assemble devoted and organized congressional backing, which means first gaining members’ attention and stressing the importance of his mission. Second, in a New York Times guest essay, Rep. Lieu introduced legislation to create a nonpartisan AI commission [9]. The commission would recommend how to structure a federal AI regulatory agency, which types of AI should be regulated, and which standards should apply, with a focus on ensuring that AI’s benefits are widely distributed and its risks minimized.

Rep. Lieu’s call to action raises hard questions: Do federal agencies have the expertise to regulate AI? How can they attract qualified talent? The recency of the AI revolution does present problems, but who else can we turn to other than lawmakers and the federal government? Regardless of whether federal agencies currently have the expertise to regulate AI, those in power must find a way, whether by outsourcing to gather the educated opinions of those who are qualified, educating members who are in the dark about AI on the significance of its regulation, or relying heavily on those equipped to “carry the weight,” so to speak. Attracting talented and educated individuals with AI experience could be a straightforward fix, though it requires raising the salaries of government employees to compete with the private sector.

The National Institute of Standards and Technology (NIST) has released the second draft of its AI Risk Management Framework [10]. The framework provides guidelines for how businesses, industries, and society should control and lessen the dangers associated with artificial intelligence, including how to address algorithmic bias and promote transparency with stakeholders. Rep. Lieu notes that these are merely recommendations and include no means of enforcing compliance. Nevertheless, he supports NIST’s framework and vouches for the need to expand on its excellent work by establishing a regulatory framework for artificial intelligence.

How should we think internationally about AI? It’s true that if, for example, China builds a formidable and well-funded AI system, U.S. legislation alone will ultimately not matter, given the inevitable spread of such technology. But this only highlights the vital need for the U.S. to engage in international collaboration. Collaborating internationally on AI can reinforce the U.S.’s dominance in the production, development, and deployment of cutting-edge technology while furthering the safe development and deployment of AI worldwide. An optimal starting point is an international treaty laying out specific regulations for Lethal Autonomous Weapons (LAWs). LAWs are autonomous military systems that can independently seek out and engage targets based on programmed constraints and descriptions [11]. These fully independent weapons can operate virtually anywhere: in the air, on land, underwater, or in space. Powerhouses like Russia, China, the U.S., and Israel have been tirelessly perfecting different types of LAWs. In July 2015, the Campaign to Stop Killer Robots accumulated over 1,000 signatures from artificial intelligence experts on a letter warning of the threat of an AI arms race; prominent thinkers including Stephen Hawking and Steve Wozniak were among the co-signers. The letter also called for a more drastic measure: a complete ban on autonomous weapons. Undoubtedly, the institutionalization of new international rules and norms will be imperative for military AI arms control [11]. By collaborating internationally to craft a formidable treaty regulating the development and deployment of LAWs and clearly laying out norms surrounding military AI arms control, we can help ensure global security.

One solution proposed to counter AI’s economic consequences is a universal basic income (UBI). However, many would argue that UBI would not adequately replace the income lost to AI; judging by consumer behavior in response to COVID-19 stimulus payments, recipients of UBI might not spend the money on living necessities. AI’s employment harms extend beyond income: algorithms trained on biased data sets perpetuate bias in recruitment and risk discriminating against job applicants.


Some members of Congress have begun thinking about AI’s growing dominance, but not nearly enough. Yes, the information presented in this piece about AI’s ability to dominate society is startling, but confronting it is necessary to secure the future of human beings. Legislative change is essential to protect our society as we know it. How AI evolves should be, and is, in our hands; it is not some obscure coding algorithm beyond our control. To ensure the responsible development and deployment of AI, Congress must take a proactive approach to create and support clear, comprehensive legislation that addresses the unique ethical, social, and legal challenges AI poses. The National Institute of Standards and Technology and ambitious members of Congress like Rep. Lieu and Rep. Auchincloss have provided valuable recommendations and resolutions on AI regulation, which Congress should study and ultimately pass. These committed members of Congress and established agencies are just the beginning of American AI regulation. With these first steps into a world of incredibly intelligent AI comes the need for conversation: among members of Congress, between agencies, and internationally. Such conversation will ensure transparency, accountability, and fairness in AI applications. Only through a collaborative effort can we harness the benefits of AI while mitigating its potential negative consequences. By taking a proactive approach, we can ensure that AI is developed and deployed responsibly and ethically and continues to be a force for good in our society.


1. Rosenblatt, Kalhan. “ChatGPT passes MBA exam given by a Wharton professor.” NBC News, 24 January 2023, Accessed 28 February 2023.
2. Chang, Anthony. “ChatGPT and large language models in clinical medicine: a new paradigm of medical education and clinical care (part II).” AI Med, 28 February 2023, Accessed 28 February 2023.
3. LeBlanc, Steve. “Massachusetts congressman reads AI-generated speech on House floor.” WBUR, 26 January 2023, Accessed 28 February 2023.
4. Santaliz, Kate, and Julie Tsirkin. “AI wrote a bill to regulate AI. Now Rep. Ted Lieu wants Congress to pass it.” NBC News, 26 January 2023, Accessed 28 February 2023.
5. Wong, Scott. “Congress has had a hands-off approach to Big Tech. Will the AI arms race be any different?” NBC News, 14 February 2023, Accessed 28 February 2023.
6. Kelly, Jack. “Ep 6: Living to Serve | SEARCH ON.” YouTube, 12 June 2018, Accessed 28 February 2023.
7. The House of Representatives. “Auchincloss Introduces Bill to Create a U.S.-Israel Artificial Intelligence Research and Development Center | U.S. Congressman Jake Auchincloss Of Massachusetts 4th District.” Jake Auchincloss, 3 September 2021, Accessed 28 February 2023.
8. The House of Representatives. “Rep Lieu Joins House Science Committee To Focus On AI.” Congressman Ted Lieu, 31 January 2023, Accessed 28 February 2023.
9. Lieu, Ted. “Opinion | AI Needs To Be Regulated Now.” The New York Times, 23 January 2023, Accessed 28 February 2023.
10. National Institute of Standards and Technology. “AI Risk Management Framework: Second Draft - August 18, 2022.” National Institute of Standards and Technology, 18 August 2022, Accessed 28 February 2023.
11. Wikipedia. “Lethal autonomous weapon.” Wikimedia Foundation, 7 March 2023, Accessed 7 March 2023.