AnnoVelocity

AI algorithmic biases that cause a huge negative impact.

Truth is, there is no escaping algorithms.

If you have binge-watched Netflix, or dropped into an online store during lunch, you’ve put yourself at the mercy of algorithms. It’s almost impossible to use the internet without giving away some of your personal information, be it physical characteristics or birth date or simply your browsing history.

The problem starts when algorithms are biased.

Fundamentally, an algorithm is trained to perform a task, and the data it is fed reflects what is going on in society now and what went on in the past. For example, an algorithm that screens job applications might set its criteria for what makes a CV a good match based on who holds those jobs today. If the database used to train it consists mostly of white men, the algorithm is likely to decide that the ideal candidate is a white man. The data we use to 'train' algorithms reflects society's inequalities, and so biases get baked into the algorithm. Whether we qualify for a loan, are fit for a job, or should have a place in a study program are all things that algorithms may now determine, and the consequences for us and our society could be dramatic.
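As a minimal illustration of how that skew can be surfaced before a model is ever trained, the sketch below compares historical selection rates across groups in a made-up hiring dataset. All column names, groups, and numbers are hypothetical, not any company's actual data or pipeline.

```python
# Minimal sketch: surface group imbalance in hypothetical hiring data
# before it is used to train a screening model. All names are illustrative.
import pandas as pd

# Hypothetical historical outcomes that would be used as training labels.
applicants = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group: how often each group was hired historically.
rates = applicants.groupby("group")["hired"].mean()
print(rates)

# Disparate-impact style ratio: values far below 1.0 suggest that a model
# trained on these labels is likely to reproduce the historical skew.
ratio = rates.min() / rates.max()
print(f"selection-rate ratio: {ratio:.2f}")
```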

Experts say that excellent data annotation also helps reduce potential biases in the training data. This contributes to the development of fair and ethical AI systems that do not inadvertently perpetuate harmful stereotypes or discriminate against certain groups of people.

Annova Solutions is the perfect Human-in-the-Loop partner for companies working on AI-driven transformation, delivering 99.9% data accuracy. Annova's annotation accuracy has produced one of the best ball-tracking algorithms in football, which has revolutionized how decisions are made at one of the top clubs in European football. This work has also been featured in The Wall Street Journal.

A huge AI bias example: the childcare benefits scandal in the Netherlands.


It all started when Dutch tax authorities used a self-learning algorithm to create risk profiles in an effort to spot child care benefits fraud.

Authorities penalized families over a mere suspicion of fraud based on the system’s risk indicators. Tens of thousands of families — often with lower incomes or belonging to ethnic minorities — were pushed into poverty because of exorbitant debts to the tax agency. Some victims committed suicide. More than a thousand children were taken into foster care.

In 2020, it came to light that the Dutch tax authorities had used algorithms that mistakenly labelled around 26,000 parents as having committed fraud in their childcare benefit applications. Many of these parents had an immigration background. They were required to pay back large sums, which led to great financial and psychological difficulties for the families concerned. The data protection authority concluded that the processing of data by the AI system in use was discriminatory.

Ouch. The discriminatory Apple Credit Card


This is a faux pas from a company that leads the innovation curve.

When Apple launched its famous credit card, a tweet from web programmer and author David Heinemeier Hansson went viral, accusing the card of giving his wife a credit limit 20 times lower than his, despite the fact that the couple filed joint tax returns and she has a higher credit score.

Hansson wrote: "The AppleCard is such a fxxxg sexist program. My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple's black box algorithm thinks I deserve 20x the credit limit she does. No appeals work." There’s more: Janet Hill, wife of Apple co-founder Steve Wozniak, was given a credit limit amounting to only 10 percent of her husband’s, and Wozniak himself acknowledged the card’s faults.

Bias that can even cost lives.


Many researchers are worried about algorithms for skin-cancer detection. Most of these models perform poorly at detecting skin cancer on darker skin because they were trained primarily on images of light-skinned individuals, and their developers did not apply the principles of inclusive design when building them.

The answer to the biases: consider Human-in-the-Loop systems


Most machine learning models rely on data that has been prepared by humans. But the interaction between humans and machines doesn't stop there: the most powerful systems are set up so that both sides interact continuously, through a mechanism commonly referred to as Human in the Loop (HITL).

We think the term is a bit off-putting, as it implies that the machine is calling the shots over humans, when the opposite is true. The goal of Human-in-the-Loop technology is to do what neither a human being nor a computer can accomplish alone. When the machine isn’t able to solve a problem, humans step in and intervene, creating a continuous feedback loop. With constant feedback, the algorithm learns and produces better results every time, resulting in more accurate data sets and improved safety and precision.
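A minimal sketch of that loop, assuming a hypothetical model, confidence threshold, and review function (none of them any vendor's actual system): predictions below the threshold are routed to a human reviewer, and the human's answer is queued up for the next round of training.

```python
# Minimal Human-in-the-Loop sketch: the model handles confident cases,
# humans handle the rest, and their answers feed the next training round.
# The model, threshold, and review function are all hypothetical.
from typing import Callable, List, Tuple

CONFIDENCE_THRESHOLD = 0.85               # below this, a human takes over
retraining_queue: List[Tuple[str, str]] = []  # human-verified examples

def human_in_the_loop(
    item: str,
    model: Callable[[str], Tuple[str, float]],
    ask_human: Callable[[str], str],
) -> str:
    label, confidence = model(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                          # machine is confident enough
    corrected = ask_human(item)               # human steps in and decides
    retraining_queue.append((item, corrected))  # feedback for retraining
    return corrected

# Example usage with stand-in callables.
fake_model = lambda item: ("cat", 0.42)
fake_reviewer = lambda item: "dog"
print(human_in_the_loop("blurry-photo.jpg", fake_model, fake_reviewer))
print(retraining_queue)
```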

Answering biases… more.


There are many things that can be done. Testing algorithms in a real-life setting is a big one.

Let’s take job applicants, for one. Your AI-powered solution might not be trustworthy if the data your machine learning system is trained on comes from one specific group of job seekers. That might not matter when you apply the AI to similar applicants, but problems arise when you apply it to a different group of candidates who weren’t represented in your data set. In that scenario, you are essentially asking the algorithm to apply the prejudices it learned from the first candidates to a set of individuals for whom those assumptions might be incorrect.

To prevent this from happening, and to identify and solve these issues, you should test the algorithm in a manner comparable to how you would use it in the real world, accounting for so-called counterfactual fairness: would the decision change if only a sensitive attribute, such as gender or ethnicity, were different? Beyond testing, experts recommend many further steps to mitigate algorithmic bias, including better education in science and technology.
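One hedged way to make that check concrete is sketched below, using a deliberately biased stand-in model and made-up feature names: hold everything else fixed, flip only the sensitive attribute, and verify that the decision does not change.

```python
# Minimal counterfactual-fairness spot check: flip only the sensitive
# attribute and see whether the (hypothetical) model changes its decision.
def counterfactual_check(model, applicant: dict, sensitive_key: str, alternatives) -> bool:
    baseline = model(applicant)
    for value in alternatives:
        counterfactual = {**applicant, sensitive_key: value}
        if model(counterfactual) != baseline:
            return False   # decision depends on the sensitive attribute
    return True            # decision is stable under the flip

# Stand-in model that (wrongly) keys off gender -- the check should fail.
biased_model = lambda a: "approve" if a["gender"] == "male" else "reject"
applicant = {"gender": "male", "experience_years": 7, "credit_score": 720}

print(counterfactual_check(biased_model, applicant, "gender", ["female", "nonbinary"]))  # False
```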

The good news is that the industry is listening.

Know more about Annova Solutions: Click here
Write to us at contact@annovasolutions.com
Acknowledgement: This article has been sourced from some of the most respected names in journalism across the world.
