
Companies using controversial AI software to help hire or fire employees – and gauge whether they like their boss

Annie Palmer | The Daily Mail

Sometime soon, artificial intelligence could be to blame for you getting hired or fired.

A growing number of employers have turned to AI to make hiring and firing decisions, as well as to determine how people feel about their bosses, according to the Wall Street Journal.

One of the most popular kinds of workplace-focused AI software is called Xander, and it can determine whether an employee feels optimistic, confused or angry, among other things.

To do this, Xander analyzes responses to open-ended questions, assigning attitudes or opinions to employees based on language and other data, the Journal said.
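
The Journal doesn't describe how Xander works under the hood, but the general technique, classifying free-text survey answers into attitude categories, can be sketched in a few lines of Python. The example below is a minimal illustration, not Xander's actual method; the sample answers, the labels and the TF-IDF-plus-logistic-regression pipeline are all assumptions made for the sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled survey answers; a real system would train
# on thousands of responses, not six.
answers = [
    "I love the new direction the team is taking",
    "Great support from my manager this quarter",
    "I have no idea what our priorities are anymore",
    "Nobody tells us anything about the reorg",
    "I'm tired of being ignored in every meeting",
    "Management keeps breaking its promises",
]
labels = ["optimistic", "optimistic", "confused", "confused", "angry", "angry"]

# Turn each answer into word weights, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(answers, labels)

# Classify a new open-ended response.
print(model.predict(["I'm not sure where this project is headed"]))
```

A real system would handle far subtler language than this toy; the point here is only the shape of the pipeline, from raw text to a predicted attitude.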

In the past, companies have often used technology to keep track of employee actions and increase productivity.

It’s only recently, however, that AI has become a useful tool for measuring employee sentiment, as well as for making hiring, firing and compensation decisions.


More than 40% of global employers have used AI processes of some kind, according to the Journal.

At the same time, the emergence of AI in human resources has some regulators worried.

Experts and employment lawyers have grown concerned that AI may contain biases that could lead to workplace discrimination.

And some employees simply don’t want to be tracked with AI.

AI still struggles to recognise some human states of mind, such as depression and sarcasm.

Algorithms have been shown on numerous occasions to be susceptible to racial biases.

A hiring algorithm may pick up on a higher rate of absences among people with disabilities and recommend against hiring them, the Journal said.
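
To make that concrete, here is a toy simulation of how such proxy discrimination can arise even when disability status is never shown to the model. All of the data, feature names and effect sizes below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
disabled = rng.random(n) < 0.1                       # protected attribute
# Absences correlate with disability; underlying skill does not.
absences = np.where(disabled, rng.poisson(8, n), rng.poisson(3, n))
skill = rng.normal(0.0, 1.0, n)
# Historical "good hire" labels carry a small bias against absences.
past_good_hire = (skill - 0.15 * absences + rng.normal(0, 0.5, n)) > 0

# Disability status is never a feature, yet the model learns to
# penalise absences, which fall disproportionately on disabled people.
X = np.column_stack([skill, absences])
model = LogisticRegression().fit(X, past_good_hire)
scores = model.predict_proba(X)[:, 1]

print("mean hire score, disabled:    ", round(scores[disabled].mean(), 2))
print("mean hire score, not disabled:", round(scores[~disabled].mean(), 2))
```

The model faithfully reproduces the bias baked into its training labels, which is exactly the pattern experts and employment lawyers are worried about.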

It’s unsurprising, then, that the Equal Employment Opportunity Commission determined in 2016 that the technology could potentially create new barriers to opportunity.

While AI is nowhere near as smart as a human, the technology has continued to advance in recent years.

For example, Google-owned research firm DeepMind has built a new artificial intelligence that is learning to understand the ‘thoughts’ of others.

The software is capable of predicting what other AIs will do, and can even understand whether they hold ‘false beliefs’ about the world around them.

DeepMind reports that its bot can now pass a key psychological test, one that most children only develop the skills for at around age four.

Its proficiency in this ‘theory of mind’ test may lead to robots that can think more like humans.

Most humans regularly think about the beliefs and intentions of others, an abstract skill shared by only a small fraction of the animal kingdom, including chimps and orangutans.

This ‘theory of mind’ is key to our complex social interactions, and is a must for any AI hoping to imitate a human.
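
The classic version of the test referred to above is the ‘Sally-Anne’ false-belief task, and its logic can be caricatured in a few lines of code. The sketch below is purely illustrative and has nothing to do with DeepMind's actual system; passing means answering with the agent's (false) belief rather than with the true state of the world.

```python
# Minimal sketch of the Sally-Anne false-belief task. The variable
# ball_location tracks where the ball really is; each Agent tracks
# only where it last SAW the ball. All names here are illustrative.

class Agent:
    def __init__(self, name):
        self.name = name
        self.belief = None          # where this agent thinks the ball is

    def observe(self, location):
        self.belief = location      # beliefs update only on observation

sally, anne = Agent("Sally"), Agent("Anne")

ball_location = "basket"
sally.observe(ball_location)        # both see the ball in the basket
anne.observe(ball_location)

# Sally leaves the room; Anne moves the ball. Sally observes nothing.
ball_location = "box"
anne.observe(ball_location)

# Passing the test means predicting Sally's behaviour from her
# now-false belief, not from the true location of the ball.
print("Ball really is in: ", ball_location)   # box
print("Sally will look in:", sally.belief)    # basket
```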

How does artificial intelligence learn?

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.
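
As a toy illustration of that training loop, the sketch below hand-codes a two-layer network in NumPy and feeds it the XOR pattern, which no single-layer (linear) model can learn. The architecture, learning rate and iteration count are arbitrary choices for the demo, not anyone's production setup.

```python
import numpy as np

# Training data: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))          # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))          # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: two sigmoid layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```

The same recipe, repeated prediction followed by small weight corrections, is what happens at vastly larger scale inside the systems behind translation and facial recognition.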

Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.

The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge.

A newer breed of ANN, the generative adversarial network (GAN), pits the wits of two AI bots against each other, which allows them to learn from each other.

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.
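
That adversarial game can be reduced to a one-dimensional toy: a ‘generator’ with a single shift parameter tries to make its samples look like real data to a logistic-regression ‘discriminator’. The sketch below is a deliberate caricature of a GAN, with every number chosen for simplicity rather than taken from any real system.

```python
import numpy as np

rng = np.random.default_rng(1)
real_mean = 3.0                 # "real" data is drawn from N(3, 1)
theta = 0.0                     # generator: g(z) = theta + z, z ~ N(0, 1)
w, b = 0.0, 0.0                 # discriminator: D(x) = sigmoid(w*x + b)
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(3000):
    real = rng.normal(real_mean, 1.0, size=64)
    fake = theta + rng.normal(0.0, 1.0, size=64)

    # Discriminator update: ascend log D(real) + log(1 - D(fake)),
    # i.e. plain logistic regression with labels real=1, fake=0.
    p_real = sigmoid(w * real + b)
    p_fake = sigmoid(w * fake + b)
    w += lr * (((1 - p_real) * real).mean() - (p_fake * fake).mean())
    b += lr * ((1 - p_real).mean() - p_fake.mean())

    # Generator update: ascend log D(fake) by shifting theta so its
    # samples look more "real" to the current discriminator.
    fake = theta + rng.normal(0.0, 1.0, size=64)
    p_fake = sigmoid(w * fake + b)
    theta += lr * ((1 - p_fake) * w).mean()

print(round(theta, 2))   # theta drifts toward real_mean = 3.0
```

Each bot improves only because the other does: the discriminator sharpens as the fakes get better, and the generator improves by chasing the sharpened discriminator.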
