The artificial intelligence (AI) tool ChatGPT was released last year. Since then, the possibility that artificial intelligence might take over the world has worried people more than ever.
A new report from New York University’s Stern Center for Business and Human Rights identifies eight risks of generative AI. Some of those risks especially concern reporters and news media organizations.
Disinformation, computer attacks, privacy violations, and the weakening of news media are among the risks the team reports.
Stern Center assistant director Paul Barrett was a co-writer of the report. He told VOA that people are confused about what risks AI presents now and in the future.
Barrett said: “We shouldn’t get paralyzed by the question of, ‘Oh my God, will this technology lead to killer robots that are going to destroy humanity?’”
The systems being released right now will not lead to the extreme future dangers some people fear, Barrett explained. Instead, the report urges lawmakers to face some of the problems AI already presents.
Among the biggest concerns with AI are the dangers it presents for reporters and activists.
The report says AI makes it much easier to dox reporters online. Doxxing is when a person’s private information, like their address, is posted publicly.
Disinformation is another problem, as AI makes it easier to create propaganda. The report noted Russia’s involvement in the 2016 U.S. presidential election. It said use of AI could have widened and deepened Russia’s interference with the process.
Barrett said AI “is going to be a huge engine of efficiency, but it’s also going to make much more efficient the production of disinformation.”
Disinformation could also be dangerous for news reporters because it could lead the public to trust them less.
And AI could worsen financial problems for news media groups. People are less likely to seek out news reports, the researchers say, because they can get answers from ChatGPT instead. The report says that could shrink traffic to news sites, causing losses in their advertising revenue.
However, AI could also be helpful for the news industry. The technology can examine data, fact-check sources, and produce headlines speedily.
The report urges the government to supervise AI companies more in the future.
“Congress, regulators, the public – and the industry, for that matter – need to pay attention to the immediate potential risks,” Barrett said.
I’m Caty Weaver.
The Associated Press reported this story. Dominic Varela adapted it for VOA Learning English.
Words in This Story
confuse – v. to make someone unable to think clearly or to understand
paralyzed – v. unable to move or act
efficient – adj. capable of producing desired results with little or no waste (as of time or materials)
regulator – n. a person or body that supervises a particular industry or business activity
potential – adj. existing in possibility; capable of development into actuality
What do you think of this story?
We want to hear from you. Write to us in the Comments Section.