OpenAI Exec Admits AI Needs Regulation

June 21, 2023

OpenAI CTO Mira Murati stoked the debate over government oversight of artificial intelligence when she admitted, in an interview with Time magazine published Sunday, that the technology should be regulated.

Murati told Time: “It is important that OpenAI and other companies bring this to the public’s attention in a controlled and responsible way. But we are a small group and need much more input into this system, input that should go beyond the technologies. Definitely regulators, governments, and everyone else.”

When asked whether it was too early for government to get involved in AI’s development, given fears that regulation could hinder innovation, she replied: “It is not too early,” adding that the impact of these technologies will require everyone to get involved.

Greg Sterling, co-founder and CEO of Near Media, a news, analysis, and commentary website, said that some regulation is probably needed because the market offers incentives for abuse.

“Thoughtfully designed disincentives can minimize the potential misuse of AI,” Sterling told TechNewsWorld, though he cautioned that “regulation can be poorly constructed” and fail to prevent any of it.

He acknowledged that regulation that is too heavy-handed or comes too early could limit AI’s benefits and harm innovation.

“Governments should convene AI specialists and industry leaders to jointly develop a framework for future regulations,” Sterling said, adding that any such framework should have an international scope.

Take a Look at Existing Laws

Jennifer Huddleston, a technology policy researcher at the Cato Institute in Washington, D.C., said that artificial intelligence, like other technologies and tools, can be used in many different ways.

“Many of these uses are beneficial, and consumers already encounter AI in beneficial ways, such as real-time translation and improved traffic navigation,” she continued. Before calling for new regulation, Huddleston told TechNewsWorld, policymakers should consider whether existing laws on discrimination and other issues can already address the problem.

Mason K. Kortz is a clinical instructor at the Cyberlaw Clinic of Harvard Law School in Cambridge, Mass.

Kortz told TechNewsWorld: “We have many general regulations that make certain things legal or illegal regardless of whether a person or an AI does them.”

He said: “We must examine the existing laws and see how they regulate AI. We must also be creative and find new ways to do it.”

He noted, for example, that there is no general regulation of autonomous-vehicle liability; if an autonomous vehicle causes a crash, there are still several areas of law to turn to, including negligence and product-liability law. These, he explained, are possible avenues for regulating the use of AI.

Light Touch Required

Kortz acknowledged, however, that many existing rules are enforced only after the fact, making them “second best” in some ways. “But it’s an important measure that we have in place as we develop regulations.”

He added: “We should be proactive in regulation. Legal recourse is only possible after an injury has been caused. It would be better if the harm never happened at all.”

Mark N. Vena, president and principal researcher at SmartTech Research in San Jose, Calif., argued that heavy regulation could suppress the growing AI industry.

“At this stage, I am not a fan of government regulation of AI,” Vena told TechNewsWorld. “AI has many benefits, and government regulation could end up stifling them.”

He maintained that a similar stifling effect on the internet was avoided in the 1990s through “light-touch” regulation like Section 230 of the Communications Decency Act, which exempts online platforms from liability for content posted on their sites by third parties.

Kortz, however, believes that the government can set reasonable limits on an industry without closing it down.

He said: “People criticize the FDA for being prone to regulatory capture and run by pharmaceutical companies, but we are still in a much better world than one where anyone could sell anything and put anything on a label.”

Is there a solution that curbs the bad aspects of AI while preserving the good ones? “Probably not,” Vena said, though he added that some structure is better than none.

“Letting good AI and bad AI fight it out won’t be good for anyone,” he added. “We cannot guarantee that the good AIs will win that fight, and the collateral damage could be quite significant.”

Regulation without Strangulation

Daniel Castro, vice president of the Information Technology & Innovation Foundation, a research and public policy organization in Washington, D.C., said there are a number of things policymakers can do to regulate AI without hampering innovation.

“One way is to focus on specific use cases,” Castro told TechNewsWorld. “For instance, regulating AI-generated music should be different from regulating self-driving cars.”

He continued: “Another way is to focus on behavior. For instance, discrimination is illegal when hiring employees or renting apartments — whether an AI system or a human makes the decision shouldn’t matter.”

He added that policymakers must be cautious not to unfairly hold AI to different standards or implement regulations that do not make sense for AI. “For instance, some safety requirements of today’s cars, such as steering wheels and rearview mirrors, don’t make sense for autonomous vehicles without passengers or drivers.”

Vena wants to see “transparency” in regulation.

“I would prefer regulations that require AI developers and content creators to be completely transparent about the algorithms they use,” he said, adding that the algorithms could be reviewed by an independent entity made up of academics and business representatives.

Transparency around AI algorithms and the sources from which they are derived, he said, should promote balance and reduce abuse.

Plan for Worst Case Scenarios

Kortz pointed out that many people think technology is neutral.

“I don’t think technology is neutral,” he said. “We have to consider bad actors, and we have to consider the poor decisions made by the people who create these things.”

He concluded: “I encourage anyone who is developing AI technology for a specific use case to consider not only what the intended use of their technology is, but what could be its worst possible application.”
