
Shepherd: Meta AI Presents New Approach for Better AI Language Model Output

The research team behind Shepherd leveraged feedback from two online communities to improve the model output significantly.


Meta AI has introduced Shepherd, a language model designed to critique and refine the outputs of other models. The new approach aims to address common issues in language model outputs, such as factual errors, logical mistakes, incoherence, and poor alignment.

In their research paper “Shepherd: A Critic for Language Model Generation”, the Meta AI research team describes a language model that is explicitly tuned to critique model-generated outputs and to generate feedback suggesting improvements.

Utilizing Community Feedback For Better Output

The research team behind Shepherd leveraged feedback from two online communities: Stack Exchange and the Pushshift Reddit Dataset. Pushshift is a social media data collection, analysis, and archiving platform that has collected Reddit data since 2015 and made it available to researchers. Pushshift’s Reddit dataset is updated in real time and includes historical data back to Reddit’s inception.

The data from both sources was structured into a question-answer-critique format. The team believes that by incorporating diverse feedback from these platforms, Shepherd can offer a wide range of critiques that reflect real-world user perspectives.
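To illustrate, a minimal sketch of such a question-answer-critique triple might look like the following. The field names and example content are illustrative assumptions, not the actual schema released with the paper:

```python
# Hypothetical sketch of the question-answer-critique format described in the paper.
# Field names and example content are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class CritiqueExample:
    question: str   # original post, e.g. a Stack Exchange question
    answer: str     # a candidate answer to be critiqued
    critique: str   # community feedback pointing out errors or suggesting improvements

example = CritiqueExample(
    question="Why does the sky appear blue?",
    answer="Because the ocean reflects its color into the atmosphere.",
    critique="This is incorrect: the blue color comes from Rayleigh scattering of sunlight, not reflection from the ocean.",
)
```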

To ensure the quality of the critiques, the team employed various data refinement techniques, including keyword filtering, analysis of user edit histories, and the use of community vote scores. The researchers stated in their paper, “To curate valid critiques, several techniques were employed, including keyword filtering and user edit history analysis.” They further explained that these methods helped identify critiques that led to the refinement of original answers.
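The paper does not publish its exact filtering pipeline, but a rough sketch of this kind of curation, with an assumed keyword list, vote-score threshold, and comment fields, could look like this:

```python
# Illustrative sketch of keyword filtering combined with community vote scores.
# The keyword list, threshold, and comment fields are assumptions, not the researchers' actual pipeline.
CRITIQUE_KEYWORDS = {"incorrect", "wrong", "actually", "mistake", "however"}

def looks_like_critique(comment_text: str) -> bool:
    """Keyword filter: keep comments that contain critique-style wording."""
    text = comment_text.lower()
    return any(keyword in text for keyword in CRITIQUE_KEYWORDS)

def keep_comment(comment: dict, min_score: int = 3) -> bool:
    """Combine the keyword filter with a community vote-score threshold."""
    return looks_like_critique(comment["body"]) and comment.get("score", 0) >= min_score

comments = [
    {"body": "Actually, this is incorrect because the formula ignores friction.", "score": 12},
    {"body": "Nice answer, thanks!", "score": 5},
]
curated = [c for c in comments if keep_comment(c)]  # keeps only the first comment
```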

Training and Model Comparison

For Shepherd’s training, the LLaMA-7B model was used as a base. LLaMA-7B is the smallest variant of Meta’s LLaMA family of large language models and was trained on one trillion tokens. The model’s primary function is to critique answers generated by other language models. When compared with other models like Alpaca 7B, SelFee-7B, and ChatGPT, Shepherd’s performance was notable. The research paper mentioned, “Shepherd achieved a win-rate of 53-87% against competitive alternatives.” This indicates that while Shepherd has a smaller size of 7B parameters, its critiques are on par with, or even preferred over, those of established models like ChatGPT / GPT-4 from OpenAI.
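As a rough sketch of how such a critic might be queried, the snippet below prompts a causal language model with a question and a candidate answer and decodes only the generated critique. The checkpoint path and prompt template are placeholders, not artifacts released by Meta:

```python
# Hypothetical usage sketch of a critique model; checkpoint path and prompt template are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/shepherd-7b"  # placeholder; no official checkpoint name is assumed here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

question = "What is the boiling point of water at sea level?"
candidate_answer = "Water boils at 90 degrees Celsius at sea level."

prompt = f"Question: {question}\nAnswer: {candidate_answer}\nCritique:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, i.e. the critique that follows the prompt.
critique = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(critique)  # ideally points out that the stated boiling point (90 °C) is factually wrong
```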

The introduction of Shepherd highlights the potential of refining AI outputs using community feedback. As the research paper suggests, “Shepherd’s strength lies in its ability to leverage a high-quality feedback dataset curated from community feedback and human annotations.” Meta’s new approach could pave the way for future models that prioritize real-world feedback to enhance their outputs.

Last Updated on November 8, 2024 12:02 pm CET

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
