Commentary: How to catch lying CEOs who fool even professional financial analysts

QUESTIONS OF PRIVACY AND ETHICS REMAIN

The widespread use of AI to catch lies would have profound social implications – most notably, by making it harder for the powerful to lie without consequence.

That might sound like an unambiguously good thing. But while the technology offers undeniable advantages, such as early detection of threats or fraud, it could also usher in a perilous transparency culture.

In such a world, thoughts and emotions could become subject to measurement and judgment, eroding the sanctuary of mental privacy.

This study also raises ethical questions about using AI to measure psychological characteristics, particularly where privacy and consent are concerned. Unlike traditional deception research, which relies on human subjects who consent to be studied, this AI model operates covertly, detecting nuanced linguistic patterns without a speaker’s knowledge.

The implications are staggering. For instance, in this study, we developed a second machine learning model to gauge the level of suspicion in a speaker’s tone. Imagine a world where social scientists can create tools to assess any facet of your psychology, applying them without your consent. Not too appealing, is it?
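To make the idea of such a tool concrete, here is a minimal sketch, in Python, of how a text classifier might be trained to score the "suspicion" in short transcript snippets. The example snippets, labels, features and model choice are illustrative assumptions for demonstration only; they are not the model developed in the study.

# Illustrative sketch only: a toy classifier that scores how "suspicious" a
# remark sounds. Data, labels and model choice are assumptions, not the
# study's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled snippets (1 = suspicious tone, 0 = neutral tone).
snippets = [
    "To be perfectly honest, we have absolutely no concerns about the numbers.",
    "Revenue grew eight percent, driven by the new product line.",
    "As I already said, there is nothing unusual in those transactions.",
    "We expect margins to stay roughly flat next quarter.",
]
labels = [1, 0, 1, 0]

# TF-IDF word and bigram features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(snippets, labels)

# Score a new, unlabelled remark: probability that its tone reads as suspicious.
new_remark = ["Believe me, those figures are completely accurate."]
print(model.predict_proba(new_remark)[0][1])

Even a throwaway sketch like this makes the ethical point plain: the speaker of the new remark never consented to being scored, and would have no way of knowing it had happened.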

As we enter a new era of AI, advanced psychometric tools offer both promise and peril. These technologies could revolutionise business by providing unprecedented insights into human psychology. They could also violate people’s rights and destabilise society in surprising and disturbing ways. The decisions we make today – about ethics, oversight and responsible use – will set the course for years to come.

Steven J Hyde is Assistant Professor of Management at Boise State University. This commentary first appeared on The Conversation.
