The Dark Side of AI: How ChatGPT Can Mislead and Misinform

That didn’t take long. Within weeks of the launch of ChatGPT, as people started to appreciate that the long-awaited AI inflection point may finally have arrived, the warnings began.

Take the education sector. ChatGPT has already passed a medical licensing exam in the USA, along with a Wharton Business School exam meant for MBAs.

Before the first child had returned to school, four Australian states banned its use by students: Queensland, Western Australia, Victoria and the largest, NSW.

Copyright owners are already shouting from the rooftops about IP rights, with the BBC reporting that the tool is sparking plagiarism fears.

But that may be the least of our problems.

According to Louis Rosenberg, CEO of Unanimous AI, founder of Immersion Corporation, and widely considered a pioneer of augmented and virtual reality, “Most pundits and policymakers are still not seeing the real danger that LLMs pose to society.”

Earlier this month he warned, “Everyone is focused on LLMs having accuracy problems or being used by students to cheat on assignments - those issues will be resolved.”

According to Rosenberg, the real danger of LLMs is not their flaws, but their capabilities.

“We are about to enter a new age of Conversational Computing and despite the many benefits, it will unleash a new level of danger for targeted manipulation, persuasion, and coercion.”

Who needs an email from a Nigerian prince when the machine can mimic your spouse’s words, right down to their turn of phrase?

Generative AI tools like ChatGPT offer transformational change but they also come with serious risks. In addition to concerns over plagiarism and manipulation, debate has already begun about the implications for privacy, compliance, and of course, the impact on employment.

On that last point, ChatGPT’s creator, OpenAI, is already in the market warning that ChatGPT won’t take your job – but the person using ChatGPT will. Subtle.

Business risks

So how does the machine rate its own danger to society? We asked ChatGPT to describe the dangers and risks of ChatGPT and to outline what approaches organisations can take to mitigate those risks. That’s probably as much a sensible short-term bet as it is a losing long-term proposition:

“ChatGPT, as a language model, can pose several dangers and risks such as:

· Bias and discrimination: Models trained on biased data can perpetuate and amplify these biases in their outputs, leading to discrimination.

· Misinformation and fake news: Models can generate false or misleading information, which can have serious real-world consequences.

· Privacy violations: ChatGPT or similar models can pose a threat to privacy, as they can be used to generate sensitive or personal information.”

To mitigate these risks, ChatGPT says organisations can take the following approaches:

1. Data ethics: Ensure that the data used to train models is diverse, representative, and free of bias.

2. Model transparency: Make the models and their training data open and transparent to allow for independent verification and monitoring.

3. Human oversight: Incorporate human oversight into the decision-making process to catch and correct any errors or biases generated by the model.

4. Regular auditing and testing: Regularly audit and test the models to identify any potential biases or errors and to ensure that they are functioning as intended.

5. Regulation: Support and adhere to ethical and privacy regulations to ensure that the development and deployment of language models align with societal values and norms.

6. Model limitations: Clearly communicate the limitations and uncertainties of the model's outputs to users and clearly state that the model is not a substitute for human judgment.

It missed a couple. According to the humans we spoke to when preparing this story (at least we are pretty sure they were humans), the key risk is also the simplest, the most likely, and probably the most dangerous to the typical business: the content generated by large language models like ChatGPT is only as good as the queries entered into the system.

 

Prompt engineering

Executives from fields such as technology, innovation, data science, marketing and customer experience who have invested significant time studying ChatGPT and tools like it all coalesced around a single point: such tools are at their most powerful when interrogated by staff with a deep understanding of how their business operates and the problems it solves for customers. Well-considered, competently crafted, and occasionally complex queries often yield solid results. But there is an art to crafting questions that elicit a workable response.

The problem, however, is that eliciting that response often requires multiple capabilities that are not necessarily found in a single person. Asking the right question of the software is a new and emerging discipline called “prompt engineering.”

A Chief Marketing Officer may have the necessary skills to understand the customer proposition, while a corporate librarian may have the capabilities to craft technically complex queries – but rarely will you find both in one person. Worse, the staff who get assigned these jobs are more likely to be junior than senior.
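To make that division of labour concrete, here is a minimal, hypothetical sketch (in Python) of the difference between a naive query and an engineered one. The product, audience and constraints below are invented for illustration, and the function and field names are not tied to any particular vendor’s tool; the point is simply how much business context and structure a well-crafted prompt carries before it ever reaches the model.

```python
# Hypothetical sketch of what a "prompt engineer" produces: the same request
# asked naively, and asked with business context, constraints and an output
# format. Illustrative only; not tied to any particular vendor's API.

NAIVE_PROMPT = "Write something about our new savings product."

def build_prompt(role, business_context, task, constraints, output_format):
    """Assemble a structured prompt from the pieces a business user supplies."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Business context:\n{business_context}\n\n"
        f"Task:\n{task}\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Output format:\n{output_format}\n"
    )

ENGINEERED_PROMPT = build_prompt(
    role="a marketing copywriter for an Australian retail bank",
    business_context=(
        "We are launching a high-interest savings account aimed at "
        "first-home savers aged 25 to 35."
    ),
    task="Draft three headline options and a 50-word product summary.",
    constraints=[
        "Do not promise specific returns or give financial advice.",
        "Use plain English at a Year 8 reading level.",
        "Flag any claim that would need legal or compliance review.",
    ],
    output_format="Numbered headlines, then the summary as a single paragraph.",
)

if __name__ == "__main__":
    print(ENGINEERED_PROMPT)  # the structured query a staff member would submit
```

However the prompt is ultimately submitted, the pattern is the same: the quality of the answer tracks the quality of the question.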

And then there is the matter of interpreting, testing and reality checking the results.

ChatGPT is basically a helpful sociopath. It can deliver strong messaging with an authoritative voice, but it’s also capable of making things up, as Christopher Graves, President and Founder of the Ogilvy Center for Behavioral Science at Ogilvy, warned earlier this year.

On LinkedIn he wrote: “Beware if you are using CHATGPT to help find scientific studies or even cite studies to support its assertions. It will not hesitate to lie and fabricate them.”

He then provided an example of an exchange with ChatGPT which resulted in the toolset fabricating very specific study titles and authors, and then confessing to what it had done.

“I asked it to not lie and again to seek actual studies. It lied again.”

When we asked ChatGPT (in several different ways) to describe how organisations might mitigate the risks it poses, it talked about complying with privacy and anti-discrimination laws, respecting intellectual property laws and seeking permission from the owners of the IP. It also flagged the need for algorithmic transparency and legal oversight, and it recommended writing some rock-solid contracts to limit your legal liability.

The thing it did not recommend was the very thing the human executives we spoke to stressed: train your staff and develop new capabilities and skills in the teams that will use these tools.

It’s only going to get worse

To quote the Marx Brothers: ‘Who are you going to believe – me, or your lying eyes?’

It might be apocryphal, but Chico had a point. Large language models, along with video and voice AI, have an almost limitless capacity for fraud – constrained only by the size of the data sets the models are built on, Moore’s Law, and the imagination of the people behind the grift.

That voice on the phone might not really be your business partner, even though it knows details about your fishing holiday and even uses their familiar mannerisms, ingested from social media.

That man in the video clip you just watched on what you believed was Bloomberg might not really have been Jerome Powell announcing an emergency and immediate market intervention. It might not have been a Bloomberg report at all.

And who needs a Nigerian prince when the machine can simply call up and mimic your kids to get your banking or credit card details?

Note: image courtesy of Dall-E 2: “painting in the style of Edvard Munch depicting the damage AI can do to society”