
How Much Can You Trust ChatGPT? – Establishing Identity and Security While Using AI

Published 06/05/2023


Originally published by DigiCert.

Written by Timothy Hollebeek.

Over the last few months, we’ve seen a surge in the use of AI and machine learning, driven by the widespread adoption of ChatGPT and other AI tools. These tools are exciting: they can save time, simplify research and work around the clock to do exactly what you ask.

But with these new AI tools comes a new problem: how do we know what’s true? How can we establish trust in their outputs?

I'm aware of the concerns with ChatGPT and looking forward at how the AI industry can establish better trust and security. I’ll dive into some of those concerns and how trust could be established for AI in this post.

ChatGPT knows it doesn’t know everything

ChatGPT is designed to provide accurate information to users, but as with any technology, it is not infallible. Sometimes ChatGPT makes mistakes: it is prone to bias, its training data largely ends in 2021, it raises security and privacy concerns, and it has even been known to make up information. To its credit, it is aware of some of these limitations.

When we asked ChatGPT what the top challenges and concerns are with using it, it responded with the following:

While ChatGPT has many benefits, there are also some challenges that come with using an AI language model:

  1. Lack of empathy: ChatGPT is a machine and does not have the ability to empathize with users. This means that it may not be able to provide emotional support or understand complex human emotions.
  2. Limited understanding: ChatGPT's responses are based on its training data, which may not always accurately reflect the nuances and complexities of real-life situations. As a result, there may be instances where its responses are incomplete or incorrect.
  3. Security and privacy concerns: ChatGPT requires access to user data to provide accurate responses, which raises concerns about data privacy and security. Users need to be careful about the information they share with ChatGPT and ensure that they are using a secure platform to interact with the AI language model.

ChatGPT has also been known to make up information that sounds plausible or to spread misinformation. However, sometimes ChatGPT gets it wrong because the internet had it wrong first.

Is ChatGPT wrong, or is the internet?

The truth is that both can be at fault. ChatGPT relies on the information it has been trained on, which can sometimes be inaccurate or out of date. And the internet as a whole is filled with unverified information that can be misleading or outright false.

Recently, my wife and I drove to a museum that our phones insisted was open, only to find it closed for the season. This kind of misinformation can be frustrating and even dangerous, especially if it leads to wasted time, money or effort. Furthermore, it can be exploited by bad actors.

ChatGPT can be used for social engineering & phishing

Just like any advancement that makes our lives and work easier, AI can also make an attacker’s job easier. Not surprisingly, we’ve seen people using AI to write scams and phishing messages, some of them quite convincing, including a recent example of scammers using AI to imitate the voice of a family member in distress. Although ChatGPT’s terms of use don’t directly address scams, they do restrict using the service for anything “that infringes, misappropriates or violates any person’s rights.” That hasn’t stopped bad actors from using AI tools like ChatGPT to write scams and even offensive content. Thus, we advise users to be cautious when engaging online and to educate themselves on how to spot, avoid and report phishing.

Don’t share all your information with ChatGPT

There are also privacy issues with ChatGPT. Information you share with ChatGPT is retained and used to continue training the model. Just as you wouldn’t post sensitive, proprietary or confidential data or code publicly, you shouldn’t share it with any AI: imagine sharing confidential information and having it surface in another user’s session because the model learned it from you. If you wouldn’t share it publicly, you probably shouldn’t share it with ChatGPT. Employers should also provide training and guidance on how to use ChatGPT securely and privately.

Establish trust through independent verification

The solution to establishing trust in AI is independent verification. Certificate Authorities (CAs) play an important role in independently verifying information so that it can be relied upon. CAs are organizations that verify the authenticity of other organizations and their information, and those verified identities are used to secure online transactions and communications. The best-known examples are verification of domain ownership and legal identities, but there is no technical reason to limit independent verification to those types of information.

By verifying that a digital certificate belongs to the entity it claims to belong to, a CA can help ensure that sensitive information is protected and that users can trust the websites and services they use. But CAs can also verify other types of information, such as business registration, domain ownership and legal status. Many newer public key infrastructure (PKI) efforts are attempting to extend digital trust to additional verifiable attributes, like professional titles, where and how digital keys were generated, whether an image is protected by a trademark and so on.
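
To make this concrete, here is a minimal sketch of what CA-backed verification looks like in practice, assuming Python with a recent version of the third-party cryptography package installed. It opens a TLS connection (the handshake fails unless a trusted CA vouches for the site’s certificate) and prints the identity fields the CA verified; www.digicert.com is just an illustrative host.

```python
# Minimal sketch: fetch a website's TLS certificate and inspect the
# identity fields a CA has independently verified.
# Assumes: pip install cryptography (version 42+ for *_utc attributes).
import socket
import ssl

from cryptography import x509

HOSTNAME = "www.digicert.com"  # illustrative; any HTTPS site works

# create_default_context() enables chain and hostname verification, so the
# handshake below fails unless a trusted CA vouches for this certificate.
context = ssl.create_default_context()

with socket.create_connection((HOSTNAME, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        der_cert = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der_cert)
print("Subject:", cert.subject.rfc4514_string())  # identity the CA verified
print("Issuer: ", cert.issuer.rfc4514_string())   # the CA doing the vouching
print("Valid:  ", cert.not_valid_before_utc, "to", cert.not_valid_after_utc)
```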

Information must be accurate to be helpful, and the most trustworthy information is information that has been independently verified by organizations like CAs. While ChatGPT and the internet can be useful tools for finding information, they should not be relied upon exclusively. By combining technology with independent verification, we can ensure that the information we use is trustworthy and reliable.

AI is in its Wild West stage — but it will mature and trust will follow

Right now, it’s a bit of a Wild West for AI. But I anticipate that as the AI industry matures, more regulation will be put in place against using AI for malicious purposes, similar to the regulation we’re now seeing in sectors like IoT and software development. I wouldn’t be surprised if, in the future, we use a verification solution for AI outputs, like digital certificates, to certify that content is authentic. PKI is a tried-and-true technology that has established digital trust across numerous use cases, from devices and users to servers, software and content, so why not AI?
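
As a purely hypothetical sketch of what certifying AI content could look like, the snippet below signs a piece of content with a private key and verifies it with the corresponding public key, again using Python’s third-party cryptography package. In a real deployment the public key would be distributed in a CA-issued certificate; the key, algorithm choice and content string here are illustrative assumptions, not an existing product.

```python
# Hypothetical sketch: certify AI-generated content with a digital
# signature, the same primitive PKI already uses for code and documents.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# In a real deployment the private key would be held by the AI provider,
# with the public key distributed in a CA-issued certificate.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

content = b"Example AI output to be certified."  # illustrative payload
signature = private_key.sign(content, ec.ECDSA(hashes.SHA256()))

# Anyone holding the certified public key can check that the content is
# authentic and unmodified:
try:
    public_key.verify(signature, content, ec.ECDSA(hashes.SHA256()))
    print("Verified: content was signed by the certified key.")
except InvalidSignature:
    print("Rejected: content was altered or signed by a different key.")
```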
