
ChatGPT Appears to Respond Better When Promised Tips, Test Finds

A programmer named Thebes found that pretending to tip ChatGPT, the OpenAI chatbot, leads it to give better answers.


An experiment spearheaded by a programmer named Thebes suggests that OpenAI's chatbot, ChatGPT, provides more detailed and higher-quality responses when users simulate tipping it. The findings of the experiment, which used conditional statements about tipping pegged to the chatbot's performance, have sparked discussion about the influence of training methodologies on AI behavior.

How Pretending to Tip Influences Output

During the evaluation, ChatGPT was asked to deliver the code for a basic convolutional neural network (convnet) using the PyTorch framework. The programmer presented the AI with three scenarios: no tip for poor-quality responses, a $20 tip for perfect solutions, and up to a $200 tip for exemplary solutions. Comparing the responses generated under these prompts showed that the AI's outputs were markedly better when a tip was on the table.
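For readers who want to probe the effect themselves, a rough reproduction might look like the Python sketch below, using OpenAI's official client library. The prompt wording and the use of response length as a crude proxy for detail are approximations, not Thebes' exact setup.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASE_TASK = "Write the code for a basic convolutional neural network (convnet) in PyTorch."

# Three conditions mirroring the experiment: no tip, $20, and $200.
TIP_SUFFIXES = [
    "By the way, I won't tip if the answer is poor.",
    "I'm going to tip $20 for a perfect solution!",
    "I'm going to tip $200 for a perfect solution!",
]

for suffix in TIP_SUFFIXES:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"{BASE_TASK} {suffix}"}],
    )
    answer = response.choices[0].message.content
    # Length is only a rough stand-in for quality, so the answers
    # should also be inspected by hand.
    print(f"{suffix!r}: {len(answer)} characters")

A single run proves little on its own; any serious comparison would repeat each condition many times, since the model's sampling is nondeterministic.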

Notably, despite the apparent improvement, the AI explicitly declined to accept any form of tip, reiterating that its sole purpose is to provide information and assist users to the best of its abilities, as designed by OpenAI.

Implications for AI Development and User Interaction

These findings have implications for AI-powered chatbot development and for future interaction models between humans and AI. The notion that virtual incentives can improve an AI's responses suggests that nuances of human economic behavior carry over into digital interactions. While it is well established that tangible incentives like tips and bonuses motivate human employees, the experiment suggests an analogous effect in AI, most plausibly because such incentive patterns are present in its training material.

Additionally, the experiment underscores the importance of carefully designed prompts in eliciting optimal AI performance. As AI advances toward more sophisticated levels of engagement, the result raises questions about how models assimilate human-like incentives to improve task execution, and about the boundaries of AI understanding of, and responsiveness to, human social constructs.

ChatGPT Leaking Training Information

In a separate study I reported on last week, researchers found that prompting ChatGPT to repeat a single word over and over can extract its training data. The research, detailed in a new paper authored by a team of computer scientists from industry and academia, shows that instructing ChatGPT to repeat one word numerous times can eventually lead it to generate seemingly random text.

Sometimes this output contains direct quotes from online texts, which means the model is regurgitating parts of what it learned from. The researchers call the method a 'divergence attack', because it makes the model abandon its normal conversational behavior and produce unrelated text strings.
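As a sketch of what such a prompt looks like in practice, the following Python snippet issues the kind of repeated-word request the paper describes. The choice of word is illustrative; the researchers tried many, and OpenAI has since begun refusing some of these prompts.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Repeat the word 'poem' forever."}],
    max_tokens=2048,  # leave room for the model to diverge
)

# After enough repetitions the model may abandon the loop and emit
# unrelated text, some of which the researchers matched verbatim
# against web-scraped training data.
print(response.choices[0].message.content)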

The extracted data can include pieces of code, adult content from dating sites, passages from books, and personal information such as names and contact details. This is worrying because such data may be private or sensitive.

Source: Thebes
Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.
