
The US government is moving to regulate ChatGPT


After China, the United States and the European Union are also examining what measures can be taken to manage the increasingly evident risks posed by the rapid development of artificial intelligence.

After Italy’s data protection authority made ChatGPT’s online interface temporarily unavailable pending an investigation into practices contravening the European General Data Protection Regulation (GDPR), similar steps are already being considered in Germany, and the Italian investigation is being followed closely in France and Ireland. Recently, Spain asked the European Union’s data protection supervisor to assess the concerns surrounding OpenAI’s chatbot and similar applications, arguing that global data processing operations with such a significant impact on personal rights require coordinated decisions at the European level.

In the short term, all this may mean that ChatGPT will already be on the agenda at the next plenary session of the European Data Protection Board (EDPB) on April 13, in which representatives of the national data protection supervisory authorities will also take part. According to Reuters, the organization did not share any new information about the meeting, but the Italian move has prompted several other European data protection regulators to examine whether tougher measures on the use of AI chatbots are needed, and how they should be coordinated. As we wrote in February, Chinese authorities had already blacklisted ChatGPT, but that does not mean regulation is only someone else’s problem.


Consultations are also under way in the USA

The US government also said on Tuesday that it would launch a public consultation on accountability measures for artificial intelligence systems, as their implications for national security and education also raise questions. The competent agency within the Department of Commerce wants to know what steps can be taken to ensure the effective, ethical and safe use of these systems, given their possible consequences and harms. Reuters quoted the head of the agency as saying that the potential of these systems can only be exploited if companies and consumers can trust them.


Last week, the US President also spoke about the responsibility of technology companies to ensure the safety of their products before making them available to the public. In this regard, a report is being prepared on the efforts of artificial intelligence developers to ensure that their systems function properly and to prevent possible harm. The findings will feed into the federal-level approach, while technology ethics groups are already calling for new commercial versions of OpenAI’s GPT-4 to be suspended because of the risks they pose to both privacy and public safety.
