Technology makes life easier and provides many opportunities, and its impact on humankind is significant. It plays a large role in our lives and is advancing so fast that we can now use AI and machine learning to select job candidates.
AI systems match CVs to job openings and perform many other tasks that make finding employees easier. This technology, however, has not only positive but also negative effects on human life. This report specifically looks at how technology is used in labour markets for recruiting, and at the models behind these algorithms.
Generally, AI systems are not biased by themselves, but they can be modelled according to the preferences, interests, and biases of people, and they then serve those biases while many people have no idea about it. In this report we look at some of these hidden forms of discrimination and at possible solutions.
Some Examples
Here are some of the forms of discrimination that can occur. There are many other forms, but we zoom in on these:
- Statistical bias
- Gender discrimination
- Skill bias
- Racial discrimination
Statistical Bias

When building a model, professionals often remove outliers: data points that deviate from the rest of the group, sometimes only one out of a hundred. But when those outliers correspond to minorities in the dataset, removing them is a form of discrimination. All groups in the dataset should be represented when the model is built, and minorities should not be removed.
As many minorities and outliers as possible should be included when the model is built. If outliers are removed anyway, there must be a document describing the concrete reason why they were removed, as in the sketch below.
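As a minimal sketch only, assuming the candidate data is a pandas DataFrame and that the column names and the three-standard-deviations rule are illustrative choices, the Python snippet below shows one way to enforce this: outliers can only be removed together with a written reason, and the removal is logged per group so it stays visible whether a minority was affected.

```python
import pandas as pd

def remove_outliers_documented(df, column, reason, group_column,
                               log_path="outlier_removals.csv"):
    """Remove outliers on one numeric column, but only with a documented
    reason, and log which groups the removed rows belonged to."""
    if not reason:
        raise ValueError("Outliers may only be removed with a documented reason.")

    # Hypothetical rule: a value is an outlier if it lies more than
    # three standard deviations from the column mean.
    mean, std = df[column].mean(), df[column].std()
    is_outlier = (df[column] - mean).abs() > 3 * std

    removed = df[is_outlier]
    if not removed.empty:
        # Record how many rows were dropped per group, with the reason,
        # so the removal of minorities is reviewable afterwards.
        report = removed.groupby(group_column).size().rename("rows_removed").reset_index()
        report["reason"] = reason
        report.to_csv(log_path, index=False)

    return df[~is_outlier]
```

A reviewer can then check the log and question any removal that disproportionately affects one group.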
Gender Discrimination

Some people consider women inferior to men, and an AI system can be modelled to fit only male candidates, so that women are not considered in the selection process. In some cases the argument is that women get pregnant and may be absent more often. This is discrimination.
If an AI system is selecting only male candidates, it must be examined how it was modelled. Organizations should follow a diversity and inclusion policy, consider women equal to men, and actively encourage women to work for them. The model needs to be rewritten to remove the gender bias.
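A minimal sketch of one such rewrite step, assuming the training data is a pandas DataFrame and that the proxy column names listed here are hypothetical: gender and obvious gender proxies are stripped before the model is retrained.

```python
import pandas as pd

# Hypothetical list of columns that encode gender directly or act as proxies for it.
GENDER_PROXY_COLUMNS = ["gender", "title", "maternity_leave", "womens_sports_club"]

def strip_gender_signals(candidates: pd.DataFrame) -> pd.DataFrame:
    """Drop the gender column and known proxy columns so the model cannot
    learn to prefer male candidates from these features."""
    present = [c for c in GENDER_PROXY_COLUMNS if c in candidates.columns]
    return candidates.drop(columns=present)
```

Dropping columns alone is not a complete fix, because other features can still correlate with gender, so the selection rates of the retrained model should also be audited, as sketched in the racial discrimination section below.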
Skill Bias

AI can be used in selection as a search engine that matches CVs with job vacancies. Instead of building the system to also consider people with intermediate skills, which can be perfectly adequate, it is often used to match only the highest-skilled CVs to the job. As a result, potential employees at intermediate level are deprived of a job.
Recruiters should give all skill levels a chance. Employers could create a traineeship or growth trajectory to raise people's skills. Algorithms should not be built to search only for the most highly skilled professionals.
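As an illustration, assuming a simplified numeric skill scale, the sketch below keeps every candidate who meets the vacancy's minimum requirement instead of retaining only the highest-scoring CVs.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skill_level: int  # hypothetical scale: 1 = junior, 2 = intermediate, 3 = senior

def eligible_candidates(candidates, required_level):
    """Return every candidate who meets the job's minimum skill requirement,
    rather than only the highest-skilled ones."""
    return [c for c in candidates if c.skill_level >= required_level]

# Usage: an intermediate candidate is not filtered out of a vacancy
# that only requires intermediate skills.
pool = [Candidate("Alice", 3), Candidate("Bala", 2), Candidate("Chen", 1)]
print([c.name for c in eligible_candidates(pool, required_level=2)])  # ['Alice', 'Bala']
```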
Racial Discrimination

Because these machines are programmed and modelled by human beings, professionals can model the search engines to favour certain groups, tribes, beliefs, or skin colours. Candidates have no idea they are being selected by a discriminatory algorithm.
If the AI system frequently selects only candidates of a certain group, tribe, belief, or skin colour, the model needs to be brought under examination and remodelled correctly. The organization should follow a diversity and inclusion policy.
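One way such an examination could look, sketched here under the assumption that past selection decisions are available as a pandas DataFrame with a group column and a 0/1 selected column (the column names and the 80% threshold, based on the widely used "four-fifths" guideline, are illustrative), is to compare selection rates per group and flag any group that falls far behind the best one.

```python
import pandas as pd

def selection_rate_audit(decisions: pd.DataFrame, group_column: str,
                         selected_column: str = "selected"):
    """Compute the selection rate per group and flag groups whose rate is
    below four fifths of the highest group's rate."""
    rates = decisions.groupby(group_column)[selected_column].mean()
    flagged = rates[rates < 0.8 * rates.max()]
    return rates, flagged

# Usage with made-up data: groups "B" and "C" would be flagged for review.
decisions = pd.DataFrame({
    "ethnic_group": ["A", "A", "B", "B", "B", "C"],
    "selected":     [1,   1,   0,   1,   0,   0],
})
rates, flagged = selection_rate_audit(decisions, "ethnic_group")
print(flagged)
```

A flagged group does not prove discrimination by itself, but it is a signal that the model should be examined and remodelled before it is used again.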