
After accusations, Twitter will pay hackers to find biases in its automatic image crops


Twitter is holding a contest in the hope that hackers and researchers can identify biases in its image-cropping algorithm, with cash prizes awarded to the winning teams (via Engadget). Twitter hopes that giving teams access to its code and its image-cropping model will allow them to find ways the algorithm can be harmful (such as cropping images in a way that reinforces stereotypes or erases the subject of the image).

© Photo by Alex Castro/The Verge

Competitors will have to provide a description of their results and a dataset that can be run through the algorithm to demonstrate the problem. Twitter will then assign scores based on the type of harm found, how many people it is likely to affect, and more.



The winning team will receive $3,500, and there are separate $1,000 prizes for the most innovative and most shared results. The amounts caused a bit of a commotion on Twitter, with some users saying they seem to be missing a zero. For context, Twitter’s regular bug bounty program will pay you $2,940 if you find a bug that lets you perform actions on someone else’s behalf (like retweeting a tweet or an image) using cross-site scripting. Finding an OAuth issue that lets you take over someone’s Twitter account will earn you $7,700.

Opening up the competition allows Twitter to get feedback from a much wider range of perspectives

Twitter has done its own research into its image-cropping algorithm before: in May, it published a research paper on how the algorithm was biased, following accusations that its automatic cropping was racist. Twitter has mostly done away with algorithmic cropping previews since then, but cropping is still used on desktop, and a good cropping algorithm is valuable for a company like Twitter.

Opening up the competition allows Twitter to get feedback from a much wider range of perspectives. For example, the Twitter team held a Twitter Space to discuss the competition, during which a team member mentioned receiving questions about caste-based biases in the algorithm, something that might not be obvious to software developers in California.

Twitter is also looking at ways its algorithm can be exploited

It’s not just unintentional algorithmic bias that Twitter is looking for, either. The grading rubric includes point values for both intentional and unintentional harms. Twitter defines unintentional harms as cropping behaviors that can result from a good-faith user posting a typical photo on the platform, while intentional harms are problematic cropping behaviors that someone posting maliciously designed photos could exploit.

Twitter says on its announcement blog that the contest is separate from its bug bounty program: if you submit a report on algorithmic biases to Twitter outside the competition, the company says your report will be closed and marked as not applicable. If you’re interested in entering, you can head over to the HackerOne page for the competition to see the rules, criteria, and more. Submissions are open until August 6 at 11:59PM PT, and the winners of the challenge will be announced at the DEF CON AI Village on August 9.


