Index – Tech-Science – Will AI bugs be hunted for a bounty?


Raji organizes algorithmic bias detection on the model of bug bounty programs that mobilize hackers. As is well known, big companies like Google, Facebook and Microsoft all run procedures in which they pay a "bounty" to the finder of a previously unknown bug, so that the knowledge enriches the company rather than malware developers.

The first bug bounty program was organized by Netscape in 1995. Last year, Google paid a total of $6.7 million to 662 security professionals who discovered vulnerabilities.

When you release a piece of software, it may contain flaws that make it vulnerable, and the information security community can track down these errors through a variety of means. Problems arising from algorithmic bias run parallel to this,

Raji told ZDNet.

The Nigerian-Canadian researcher is organizing campaigns called CRASH (Community Reporting of Algorithmic System Harms) to search for errors in AI-based applications.

Firms won't like it

AI systems are appearing in more and more applications, but as of now there is no effective way to properly test them in operation. So far, discriminatory or harmful errors have been found by independent experts. This was the case when researchers at MIT and Stanford University found that commercial facial recognition programs were biased based on skin tone.

Deborah Raji

Photo: Rajiinio, CC BY-SA 4.0 / Wikipedia

But the problem, Raji's team realized, is that finding biases in algorithms is more difficult than finding an attack surface in a program.

One challenge is to define what counts as harmful behavior in an algorithm and to create a whole methodology for detecting it. The other problem, which affects the field more directly, is that companies perceive such findings as hostile, so they are not as pleased by them as they are by a security bug that has not yet been used against them.


For example, they encountered hostile behavior from Amazon specifically over the face recognition issues found in its Rekognition service. Raji therefore does not trust corporate self-regulation to deliver ethical AI, and believes compliance with social norms can be achieved only through strong public pressure or through legislation.

[source: ZDNet, Wikipedia]


Copyright © 2024 Campus Lately.