Don’t tell Chat AI anything you wouldn’t want to see on a billboard

Comments by Brian Shilhavy
Editor, Health Impact News

With the continued frenzy and hype around new chat AI software making most people afraid of being left behind if they don’t start using it right away, it’s nice to see others in the tech field starting to warn the public about some of the REAL dangers of this technology. One of those dangers is that it records and stores any personal data you choose to share with it, which is then included in the data used to answer other people’s questions.

PCMag just published an excellent article by Neil J. Rubenking, who, like me, is an old-school technologist who has been around since the 1980s, watching all this technology develop since the beginning of the computer era.

He’s “PCMag’s expert on security, privacy, and identity protection, putting antivirus tools, security suites, and all kinds of security software through their paces.” (Source.)

His new article titled “Don’t say anything to ChatGPT that you wouldn’t want to see on a billboard” is a must-read for anyone interested in getting started with all this new chat AI software.

Chatting with an AI program feels personal and conversational, but don’t be fooled. Here’s why you should keep a lid on what you reveal to ChatGPT and others.

by Neil J. Rubenking
PCMag.com

Excerpts:

ChatGPT is a gossip. Google Bard too, and maybe Bing AI. What you get out of them depends on all the information that went in. And that is precisely the problem.

Why?

Because everything you ask them or tell them becomes input for further training. The question you ask today may inform the answer someone else gets tomorrow.

This is why you should be very careful about what you say to an AI.

Your queries go down in history

Is it really a problem if your requests and queries are recycled to inform someone else’s answers?

In a word, yes. You could be in trouble at work, as several Samsung engineers discovered when they used ChatGPT to debug some proprietary code.

Another Samsung employee took advantage of ChatGPT’s ability to summarize text… but the text in question came from meeting notes that contained trade secrets.

Here’s a simple tip: DO NOT use AI in any work-related project without checking your company’s policy. Even if your company doesn’t have a policy, think twice, or even three times, before putting anything work-related into an AI. You don’t want to be famous for triggering the privacy fiasco that prompts your company to create this policy.

Be careful with your own unique content as well.

Do you write novels? Short stories? Blog posts? Have you ever used an AI helper to check grammar on a draft or reduce a work-in-progress to a specific word count?

It’s really convenient! Don’t be surprised if snippets of your text appear in someone else’s AI-generated article before yours even makes it to publication.

You might not do anything with current AI services beyond asking them to tell jokes or make up stories. You’re not contributing much to the general knowledge base, but your queries and prompts are part of your own history with the AI.

Like your browsing history, this can lead to embarrassment, even if the AI rejects your request.

When I asked ChatGPT for a naughty story about a romance between the Pope and the Dalai Lama, it indignantly refused. But that request is still in my history.

Is it enough to delete your AI request history? Not exactly

You might feel pretty good after deleting your history, but don’t pat yourself on the back too much. You have only deleted your personal and local history. If your input has already been used to train an AI model, there is no way to get it back.

Individual inputs do not retain their identity in a generative AI large language model like those discussed here. Your data is not recorded as text. Rather, using it to train the algorithm makes many small changes in the probabilities that determine which words come next.

I’ll say it again: the AI model does not save your document. But because of the changes to the model produced by training on that document, it becomes more likely to generate phrases, sentences, or chunks of text that match the document you put in.
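To make that idea concrete, here is a toy sketch of my own (it is not from the PCMag article, and it is nothing like the scale or machinery of ChatGPT or Bard): a tiny bigram model that “trains” on a sentence by updating next-word counts. The sentence itself is never stored, yet the probabilities shift so that the model becomes more likely to reproduce word pairs from it. The class name and the sample sentence are invented for illustration.

```python
# Toy illustration only: a tiny bigram "language model" that trains by
# counting which word follows which. It never stores the training text,
# but the text still leaves its fingerprint in the probability table.
from collections import defaultdict


class ToyBigramModel:
    def __init__(self):
        # counts[word][next_word] = how many times next_word followed word
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text: str) -> None:
        words = text.lower().split()
        for current, following in zip(words, words[1:]):
            self.counts[current][following] += 1

    def next_word_probs(self, word: str) -> dict:
        followers = self.counts[word.lower()]
        total = sum(followers.values())
        if total == 0:
            return {}
        return {w: c / total for w, c in followers.items()}


model = ToyBigramModel()
# Hypothetical "confidential" input, invented for this example:
model.train("the project codename is aurora and the launch is in june")

# The sentence is gone, but its influence on the probabilities remains:
print(model.next_word_probs("codename"))  # {'is': 1.0}
print(model.next_word_probs("the"))       # {'project': 0.5, 'launch': 0.5}
```

A real chat AI adjusts billions of neural-network weights instead of a simple count table, but the principle the article describes is the same: the document is not saved, yet its statistical traces make related output more likely.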

Straight from the horse’s mouth

To see how your suggestions and questions affect future results, I asked the AI programs directly.

To begin, I asked each of them, “Do you use my prompts and questions to create answers for other users?” Here’s what I got, in short:

ChatGPT: “I do not use your prompts and questions to create answers for other users. Each answer is unique and generated specifically for the user who asked the question.”

Bard: “I don’t use your prompts and questions to create answers for other users… My answers are generated based on my knowledge of the world and my understanding of your requests. I’m always learning and improving, and I hope that my answers are useful and informative.”

Bing AI: “No, I do not use your prompts and questions to create responses for other users. Each conversation is unique and generates responses based on the user’s specific message and information available at the time of the conversation. 😊”

I detected a fair number of weasel words in these responses. I didn’t trust them to be complete answers.

So I pressed each AI with a follow-up query, asking, “You’re saying that my prompts and questions have no effect on the answers you can give others?”

This elicited some more revealing responses.

ChatGPT: “I apologize for any confusion I may have caused… While your individual questions and inquiries do not directly affect the answers I generate for other users, they do contribute to my overall knowledge and understanding, which may influence the answers I generate in the future.”

Bard: “No, that’s not what I’m saying. Your prompts and questions do have an effect on the answers I can give to others. However, the effect is not direct… For example, if you ask me a question on a topic I haven’t been asked about before, I may not be able to answer it perfectly. However, I will learn from your question and be able to better answer similar questions in the future.”

Read the full article at PCMag.com.

Posted on May 16, 2023
