The ‘Father of the Internet’ warns that investing in cool AI may be uncool
Vint Cerf has been called the father of the Internet. He raised eyebrows on Monday by urging investors to exercise caution when investing in companies built around conversational bots.
Cerf, a Google vice president, said that bots are still making too many mistakes. Google is currently developing an AI chatbot named Bard.
He told the audience at the TechSurge Deep Tech Summit, hosted by venture capital firm Celesta at the Computer History Museum in Mountain View, Calif., that he had asked ChatGPT to write his bio, and it produced several different versions.
He likened the output to a salad shooter, scattering facts at random. According to SiliconANGLE, Cerf said the software mixes up [facts] because it does not know any better.
He warned investors against supporting a technology because it is cool or has generated “buzz.”
Cerf recommended that investors consider ethical issues when investing in AI.
Silicon Angle reported that he said: “Engineers such as myself should be responsible for finding a way of taming some of these technologies so they’re not likely to cause problems.”
Human Oversight Is Needed
Cerf warns that businesses eager to enter the AI race may encounter some difficulties.
Greg Sterling, co-founder and CEO of Near Media, a news, analysis, and commentary website, said that AI can produce inaccurate information, bias, and offensive results.
Speaking to TechNewsWorld, Sterling said the risks depend on how the tools are used. Digital agencies that rely too heavily on ChatGPT and other AI tools to create content or complete work for clients may produce suboptimal results or harm their clients.
He said that human oversight and checks and balances could reduce these risks.
Mark N. Vena of SmartTech Research, San Jose, Calif., warned that small businesses without expertise in AI should be cautious before embracing the technology.
Vena, a TechNewsWorld contributor, said that “any company that integrates AI into its way of doing business must at least understand the implications” of this.
He continued: “Privacy, especially at the customer level, is a major concern. Terms and conditions for use need to be very explicit, as does liability if the AI capability produces content or takes actions that expose the business to liability.”
Cerf wants users and AI developers to consider ethics when bringing AI-based products to market. However, that could be a difficult task.
“Most businesses utilizing AI are focused on efficiency and time or cost savings,” Sterling observed. “Ethics will be a secondary concern or even a non-consideration for most of them.”
Vena added that some ethical questions must be resolved before AI becomes widely accepted. He cited the education sector as an example.
He asked: “Is submitting a paper copied entirely from an AI tool ethical? Even though the content may not be plagiarism in the strictest sense, because it can be considered ‘original,’ I think most schools — especially at the college and high school levels — would still push back against that.”
He said: “I don’t think news outlets would be happy about journalists using ChatGPT to report on real-time, complex events, which often require abstract judgments that an AI tool may struggle with.”
He continued: “Ethics should play a major role. That’s why businesses and even media outlets need to sign on to an AI code of ethics, and these terms of compliance must also be part of the terms and conditions when using AI tools.”
Ben Kobren is the head of communications and policy at Neeva in Washington, D.C., an AI-powered search engine.
Kobren told TechNewsWorld that many of the unintended consequences of earlier technologies stemmed from an economic model that did not align business incentives with end users. “Companies must choose between serving advertisers and end users. In the vast majority of cases, the advertiser will win.”
He continued: “The internet was free, but it came with a price. That cost was an individual’s privacy, their time, and their attention.”
He said the same question applies to AI: will it be deployed in a business model aligned with users or with advertisers?
Cerf’s pleas for caution appear aimed at slowing AI’s entry into the market, but that seems unlikely to happen.
Kobren noted, “ChatGPT has pushed the industry much faster than anyone anticipated.”
Sterling said, “The race has begun, and there is no turning back.”
He said there are risks and benefits in bringing products quickly to market. But the financial and market incentives will override any ethical concerns. “The largest companies are talking about responsible AI, but they are pushing ahead anyway.”
In his remarks at the TechSurge Summit, Cerf reminded investors that not everyone will use AI technologies for their intended purpose. He reportedly said they “will do what is to their benefit, not yours.”
Sterling noted, “Governments, NGOs, and the industry must work together to develop rules and standards which should be integrated into these products to prevent abuse.”
He continued: “The market and competitive dynamics are faster and more powerful than government and policy processes. That is the challenge and the problem. But regulation will come. It’s only a matter of when and what it will look like.”
Hodan Omaar is a senior AI analyst for the Center for Data Innovation in Washington, D.C., which studies the intersection of technology, data, and public policy.
Omaar told TechNewsWorld that developers should take responsibility when creating AI systems, ensuring such systems are trained on representative datasets.
She added, however, that the AI system operators will make the most significant decisions regarding the impact of AI systems on society.
Kobren concluded: “It is clear that AI is here to stay. It’s going to transform many aspects of our lives — in particular, how we consume and interact with information on the internet.”