ChatGPT May Be a Useful AI Tool. So How Do We Regulate It?
ChatGPT is just two months old, but we've spent the time since its debut debating its true potential and whether it should be regulated.
Plenty of people use the artificial intelligence chatbot to conduct research, send messages on dating apps, write code, brainstorm work ideas, and more.
But anything that can be used for good can also be used for harm. Students can have it write essays for them, and cybercriminals can use it to generate malware. Even without malicious intent from users, it can produce inaccurate information, reflect biases, generate inappropriate content, store sensitive data, and, some argue, erode everyone's critical thinking skills through overreliance. Then there's the ever-present (albeit sometimes baseless) fear that the robots are taking over.
And ChatGPT can do all of this with little to no oversight from the United States government.
Nathan E. Sanders, a data scientist affiliated with the Berkman Klein Center at Harvard University, told Mashable that neither ChatGPT nor AI chatbots in general are inherently bad. "In the realm of democracy, there are a huge number of helpful applications that would benefit our society," Sanders said. It's not that AI or ChatGPT shouldn't be used, but that we must make sure it is used responsibly. "We should strive to defend vulnerable communities. In this process, we wish to defend the interests of minority groups so that the richest and most powerful interests do not prevail."
Regulating something like ChatGPT matters because this kind of artificial intelligence can show disregard for individual rights, such as privacy, and can reinforce systemic biases around race, gender, ethnicity, age, and more. We also don't yet know where risk and liability lie when the technology is used.
Rep. Ted Lieu, a Democrat from California, wrote in an op-ed for The New York Times last week, "We can harness and govern AI to create a more utopian society, or we can risk an unbridled, unregulated AI pushing us toward a nightmarish future." He also introduced a resolution in Congress, written entirely by ChatGPT, that directs the House of Representatives to support regulating AI. He used the prompt: "You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to work on AI."
All of this adds up to a rather murky future for regulation of AI chatbots like ChatGPT. Still, some places are already moving on rules for the tool. Massachusetts State Sen. Barry Finegold drafted a bill that would require companies that use AI chatbots, like ChatGPT, to conduct risk assessments, implement security measures, and disclose to the government how their algorithms work. The bill would also require these tools to place a watermark on their output to prevent plagiarism (one illustrative sketch of how such a watermark might work appears below).
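How would a watermark on AI-generated text even work? The bill doesn't say, but one idea researchers have floated is a statistical watermark: the generator quietly favors a hashed "green list" of words, and a detector later checks whether a passage contains suspiciously many of them. The Python sketch below is purely illustrative and makes several assumptions (a toy tokenization, a hash-based green list, a rough 0.5 baseline); it is not how OpenAI or any vendor actually marks ChatGPT's output.

```python
import hashlib

# Illustrative sketch of a "green list" statistical watermark detector.
# All specifics here (hashing scheme, 0.5 fraction, whitespace tokenization)
# are assumptions for demonstration, not any real product's method.

def is_green(prev_token: str, token: str, fraction: float = 0.5) -> bool:
    """Deterministically decide whether `token` is on the green list seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).hexdigest()
    return (int(digest, 16) % 1000) < fraction * 1000

def green_share(tokens: list[str]) -> float:
    """Detector side: fraction of tokens that land on their predecessor's green list.
    A generator that favored green tokens would push this well above ~0.5,
    while ordinary human-written text should hover near 0.5."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if is_green(prev, cur))
    return hits / (len(tokens) - 1)

if __name__ == "__main__":
    sample = "the cat sat on the mat and then the cat ran".split()
    print(f"green-token share: {green_share(sample):.2f}")
```

In a real scheme the generation side would bias its word choices toward the green list, so the detector's score, not a visible label, is what marks the text as machine-written.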
"This is such a powerful tool that there have to be limits," Finegold told Axios.
There are already some rules on AI in general. The White House has released an "AI Bill of Rights" that outlines how existing protections, such as civil rights, civil liberties, and privacy, apply to AI. The Equal Employment Opportunity Commission is scrutinizing AI-based hiring tools over the risk that they could discriminate against protected classes. Illinois requires companies that rely on AI during the hiring process to let the government check the tool for racial bias. Several states, including Vermont, Alabama, and Illinois, have commissions working to ensure AI is used ethically. Colorado passed a law preventing insurers from using AI that collects data that unfairly discriminates based on protected classes. And, of course, the EU is already ahead of the U.S. on AI regulation: It approved the Artificial Intelligence Regulation Act last December. None of these rules, however, is specific to ChatGPT or other AI chatbots.
So while there are some state-level rules on AI, there is nothing specific to chatbots like ChatGPT at either the state or federal level. The National Institute of Standards and Technology, part of the Department of Commerce, has an AI framework meant to give organizations guidance on using, designing, or deploying AI systems, but it's just that: a voluntary framework. There is no penalty for ignoring it. Looking ahead, the Federal Trade Commission also appears to be working on new requirements for companies that develop and deploy AI systems.
"Will the federal government somehow establish regulations or pass laws to regulate this stuff? I think that is exceedingly, highly, incredibly unlikely," Dan Schwartz, an intellectual property associate with Nixon Peabody, told Mashable. "It is not likely you will see any federal regulation happening soon." In 2023, Schwartz expects that the government will be looking into controlling the ownership of what ChatGPT produces. If you ask the tool to create code for you, for instance, do you own that code, or does OpenAI?
The second kind of regulation is likely to come from the private sector, particularly in academia. Noam Chomsky has compared ChatGPT's contribution to education to "high-tech plagiarism," and students who plagiarize already face expulsion; private regulation could work much the same way here.