Since their inception, AI tools have taken the world by storm, transforming how people live and work.
Generative AI tools, particularly ChatGPT, can produce results so convincing that they often seem to surpass any machine intelligence that came before them.
Unsurprisingly, the technology has been embraced even by tech giants, with Microsoft alone spending billions on AI.
Workplaces of all kinds have likewise adopted it, reaping benefits such as increased efficiency and productivity.
However, while these tools have revolutionised the way we work and learn, they have also introduced challenges around how information is generated, making false and malicious content harder to detect.
This challenge has prompted enterprises that deal heavily in information to prepare for an AI-driven world in which the scope for errors, misinformation and disinformation has widened.
The threat has highlighted the need for organisational policymakers to ensure they have the skills and knowledge to confront it.
Occurrence and detection of AI-generated information
In work-related settings, a telling illustration of how unreliable AI detection can be comes from a test by R. Paulo Delgado, a US-based ghostwriter specialising in technology, who ran the first 1,000 words of the US Constitution through Originality.ai, a popular AI-detection tool.
The tool returned a false positive, claiming that the opening of the constitution was produced by AI.
Giancarlo Erra, the founder of several AI-related startups, including Words.Tel, Copy Forge, and Tweetify It, reflected on how much truth can be read into AI detection verdicts.
“In my experience, AI detection tools give unpredictable and different results on the same piece of text. Sometimes they’re correct. Other times, they’re completely off, flagging something human-made or vice versa,” he said.
According to Erra, light editing is enough to trick an AI detection check. “If I get the most basic AI text marked as 100% AI-generated by most tools, I usually make straightforward changes, such as changing transitional adverbs, often overused by AI to join paragraphs,” he said, adding that merely changing the tenses can make content look more genuine.
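The kind of surface edit Erra describes can be sketched in a few lines of Python. This is a minimal illustration, not his actual method: the word list below is hypothetical, chosen to represent transitional adverbs that generated text tends to overuse.

```python
import re

# Hypothetical list of transition words often overused by AI-generated text,
# each paired with a plainer alternative (illustrative only).
TRANSITION_SWAPS = {
    "furthermore": "also",
    "moreover": "besides that",
    "consequently": "as a result",
    "in conclusion": "to sum up",
}

def soften_transitions(text: str) -> str:
    """Replace commonly overused transition words with plainer phrasing."""
    for phrase, plain in TRANSITION_SWAPS.items():
        # Case-insensitive, whole-phrase replacement.
        text = re.sub(rf"\b{re.escape(phrase)}\b", plain, text, flags=re.IGNORECASE)
    return text

sample = "Furthermore, the results improved. Consequently, costs fell."
print(soften_transitions(sample))
```

A real evasion edit would also vary sentence length and tense, as Erra notes; the point is simply that detectors keying on stylistic tics can be defeated by trivial rewording.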
AI-based misinformation has threatened and hurt businesses. False information about a company's rules or practices, layoffs, or financial soundness can breed employee mistrust, and by some accounts roughly one-third of enterprises report having been negatively affected by AI-powered misinformation.
Hany Farid, a professor of computer science at the University of California, Berkeley, in an interview with Wired, an American magazine, said, “If I want to launch a disinformation campaign, I can fail 99 per cent of the time.”
In the same interview, Farid maintained that most disinformation campaigns are bound to be foiled at some point, but the ones that do not fail tend to be those backed by AI, which has the potential to wreak havoc.
Jevin West, an associate professor at the University of Washington Information School, also spoke of the harmful repercussions of AI-generated information.
He remarked that misinformation isn't always intentional: because the technology lacks human conscience and reasoning, it can fabricate sources and generate false information of its own.
Other drawbacks of AI tools in work settings
- As this technology continues to evolve, the problems it poses for the handling of information are likely to multiply.
- Loss of employment has proven to be the most distressing outcome, as these tools can replace a large number of workers, especially those in low-skilled positions.
- To keep pace with the ever-evolving capabilities of AI tools, employees need to excel in their fields and maintain competitive skills.
- The information produced by AI tools is only as unbiased as the data they are trained on, which has led to discrimination in workplace hiring, promotion, and other processes.
- The proliferation of AI tools brings certain ethical considerations that should be addressed in the workplace, such as those pertaining to responsibility, transparency, and privacy.
- Loss of human interaction is another significant downside that has been witnessed in work environments, resulting in a decline in employee collaboration and social bonding.