
Reinforcement-Learning-for-Human-Feedback-RLHF

Public

This repository contains the implementation of a Reinforcement Learning from Human Feedback (RLHF) system using custom datasets. The project uses the trlX library to train a preference model that feeds human feedback directly into the optimization of language models.
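The repository contents are not shown here, but as an illustration of the trlX-based setup described above, the following is a minimal sketch of a PPO fine-tuning run. The model path, prompts, and `reward_fn` are hypothetical placeholders; in the actual project, the reward would come from the preference model trained on the custom human-feedback datasets.

```python
import trlx
from trlx.data.default_configs import default_ppo_config

# Hypothetical stand-in for the project's preference model: in a real RLHF
# setup this would score each generated sample with a learned reward model.
def reward_fn(samples, prompts, outputs, **kwargs):
    # Toy heuristic: mildly reward longer completions (placeholder only).
    return [min(len(out.split()) / 50.0, 1.0) for out in outputs]

config = default_ppo_config()
config.model.model_path = "gpt2"            # assumed base model
config.tokenizer.tokenizer_path = "gpt2"

# trlx.train runs PPO: it samples completions for the prompts and
# optimizes the policy against the scalar rewards from reward_fn.
trainer = trlx.train(
    reward_fn=reward_fn,
    prompts=[
        "Explain reinforcement learning in one sentence.",
        "Summarize why human feedback helps language models.",
        "Describe a reward model in plain terms.",
    ],
    eval_prompts=["What is human feedback used for in RLHF?"],
    config=config,
)
```

In practice, `reward_fn` would wrap a forward pass of the trained preference model rather than the toy heuristic shown here.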

Created: 2024-08-17T15:27:37
Updated: 2025-02-15T21:02:32
Stars: 3
Stars increase: 0