Data protection risks in generative artificial intelligence – Jogászvilág




Most modern data protection laws rest on the well-known principles of transparency, purpose limitation, access and security, among others.

In this article, we analyze some of these principles and their application when using open generative artificial intelligence systems.

More and more companies are integrating generative artificial intelligence (AI) into their products and services, or are thinking about how to do so.

This technology is groundbreaking and has the potential to dramatically increase productivity, but it should be used with caution: it poses a number of risks, including some significant privacy challenges.

1. Transparent data management

The General Data Protection Regulation (GDPR), the revised California Consumer Privacy Act (CCPA), and several other recently passed US laws require detailed information to be provided before data collection or during data processing. This information is intended for those whose data will be processed (the data subjects) and covers what the company in question plans to do with their data, the purposes the processing serves, and what data is shared, with whom, and why.

The challenge is that companies cannot provide upfront information about what open AI platforms like ChatGPT and Bard will do with the data.

Once the data enters the chatbot’s neural network, it is of course used to train the chatbot, but it can also resurface in the outputs the chatbot generates.

All of this is outside the control of the particular company; in fact, the company has no idea what is happening inside the neural network. For this reason, the information the company provides will necessarily be insufficient, or perhaps too general, to comply with the relevant legislation.

2. Purpose limitation


Under the General Data Protection Regulation and most recently enacted United States laws, the processing, use, sharing and retention of personal data must be reasonably necessary and proportionate to achieve the purpose of the data collection, as previously communicated to the data subject. As mentioned earlier, all data entered into open AI platforms is fed into the chatbot and used in unpredictable ways. A company subject to these laws therefore cannot meet the legal requirement to restrict processing if it simply enters personal data into an open generative AI tool.

In this case, it is very likely that the company will have to treat this data sharing as a “sale” of the data, which involves many regulatory complexities.

3. Access to personal data

The right of access takes different forms under different laws, but all regulations seem to be similar at their core.

Under the GDPR, for example, data subjects have the right to access their data (to see in detail what data the company collects about them), to request the correction of inaccurate data the company holds, and to exercise the right to be forgotten (i.e. request the complete deletion of their personal data). A data subject can also request that the company limit the processing of his or her data to a specific purpose, request his or her data in a format that can be transmitted to another service provider, object to the processing of certain sensitive data relating to him or her, and refuse to be subject to automated decision-making.

The CCPA gives California consumers almost all of these rights, as do the comprehensive privacy laws that US states have enacted over the past year or two (with some variation between laws, of course). When a company is responsible for collecting data (a “data controller,” as many laws call this role, meaning the company controls how data is collected and processed), it must comply with these requests unless a legal exception allows it to refuse.

There is no such exception for processing carried out with AI tools, so a company may find itself unable to fulfill all of a data subject’s requests concerning data entered into them.

4. Security of personal data

All comprehensive data protection laws require a company to implement appropriate (or at least reasonable) controls to protect the personal data it collects and processes, including when the data is processed by a service provider. Under some laws, such as the CCPA, the Attorney General has indicated that certain well-known security methods are considered reasonable while others are not. Many companies fail to meet this requirement today, especially when they assume a supplier complies with all relevant requirements based solely on its reputation, rather than auditing or verifying its security controls.

Several well-known open AI chatbots available today have already had security issues, so it is irresponsible for companies to use them without a prior security review.

Many data protection laws impose additional requirements, including that the company carry out a risk assessment to ensure that the risk of harm to the data subject does not outweigh the benefit the processing brings to the company (a test that can be difficult to pass when using open AI tools), and that the company enter into agreements with the other parties that handle the personal data, governing how that data is managed.

Domestic and foreign data protection and data security laws have a myriad of other requirements that can be difficult for businesses to comply with when entering data into an open AI chatbot or allowing a chatbot to process such data.

Caution should be exercised in the use of these technologies, which, although potentially profoundly disruptive, can create legal risks if improperly evaluated or used. What can a company do to protect itself? The first line of defense, as with all data protection issues, is to involve risk, legal and governance experts and the data protection officer in an early assessment of the risks associated with data processing, before any AI tools are used.


The second line of defense is to work with risk, data and governance experts to implement robust and consistently enforced procedures for using open AI tools, including specific restrictions on the types of data employees and contractors may enter into the system. The company must set limits on the personal data that can be entered into an open AI tool (or prohibit such input entirely where it would be unethical or unlawful).
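Such input restrictions are often backed by an automated filter that strips obvious personal data before a prompt leaves the company. The following is a minimal sketch of that idea; the patterns and function name are illustrative assumptions, and a production filter would need a vetted PII-detection library and review by the data protection team.

```python
import re

# Illustrative patterns only; real deployments need far more robust detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matched personal data with placeholders before the text
    is sent to an open generative AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact_pii("Contact Jane at jane.doe@example.com or +36 30 123 4567."))
```

A filter like this enforces the policy mechanically rather than relying on each employee remembering the rules, though it cannot catch free-text identifiers (names, addresses) without more sophisticated tooling.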

Of course, credentials for these tools can also identify the employees using them, so it is a good idea to create team login credentials that do not reflect any individual user’s personal information: for example, generic credentials that use strings of letters and numbers, or the name of a group instead of the name of a person.
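One simple way to produce such generic credentials is to combine a team label with a random alphanumeric suffix. This is a hypothetical sketch (the function name and format are assumptions, not taken from any particular identity system):

```python
import secrets
import string

def team_login_id(team: str, length: int = 8) -> str:
    """Generate a generic login identifier for a shared team account:
    a team label plus a random alphanumeric suffix, so the credential
    reveals no individual employee's name."""
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(secrets.choice(alphabet) for _ in range(length))
    return f"{team}-{suffix}"

# Produces identifiers like "legal-k3x9q2mw" instead of "jane.doe"
print(team_login_id("legal"))
```

Using the `secrets` module rather than `random` keeps the suffix unpredictable, which matters if the identifier doubles as part of an access credential.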

When an AI-based chatbot needs to process personal data because of business needs, consider closed systems that allow better control and do not feed business data into the neural network used to further train the AI tool.

In all circumstances, however, the company must ensure that its risk, legal, governance and data management teams are involved in every data processing assessment, so that any business decision to process data takes into account all risks: legal, ethical and security risks, as well as the public relations risks that may arise if the AI tool actually misuses the data or if information about the data processing is not provided properly.

(law.com/legaltechnews)

