In this paper, we analyze the performance of a multitask end-to-end transformer model on the task of conversational recommendations, which aim to provide recommendations based on a user's explicit preferences expressed in dialogue. While previous works in this area adopt complex multi-component approaches where the dialogue management and entity recommendation tasks are handled by separate components, we show that a unified transformer model, based on the T5 text-to-text transformer model, can perform competitively in both recommending relevant items and generating conversation dialogue. We fine-tune our model on the ReDIAL conversational movie recommendation dataset, and create additional training tasks derived from MovieLens (such as the prediction of movie attributes and related movies based on an input movie), in a multitask learning setting. Using a series of probe studies, we demonstrate that the learned knowledge in the additional tasks is transferred to the conversational setting, where each task leads to a 9%-52% increase in its related probe score.
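The multitask text-to-text setup described in the abstract can be sketched as follows. All task prefixes, example texts, and the `make_example` helper are illustrative assumptions, not the paper's actual data format:

```python
# Sketch: casting conversational recommendation and MovieLens-derived
# auxiliary tasks as text-to-text pairs for a single T5-style model.
# Prefixes and example texts are hypothetical illustrations.

def make_example(task_prefix: str, source: str, target: str) -> dict:
    """Format one training example as an input/target text pair."""
    return {"input": f"{task_prefix}: {source}", "target": target}

# Dialogue/recommendation task from a ReDIAL-style conversation.
dialogue = make_example(
    "recommend",
    "User: I loved The Matrix. Any suggestions?",
    "You might enjoy Inception (2010).",
)

# Auxiliary tasks derived from MovieLens metadata.
attributes = make_example(
    "attributes", "The Matrix (1999)", "genres: Action, Sci-Fi"
)
related = make_example(
    "related", "The Matrix (1999)", "Inception (2010), Equilibrium (2002)"
)

# A multitask training batch simply mixes examples from all tasks,
# so one model learns dialogue and recommendation jointly.
batch = [dialogue, attributes, related]
```

Because every task is reduced to the same input/target text interface, no separate dialogue-management or recommendation component is needed.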
Translation (by gpt-3.5-turbo)
While previous works in this area adopt complex multi-component approaches where the dialogue management and entity recommendation tasks are handled by separate components, we show that a unified transformer model, based on the T5 text-to-text transformer model, can perform competitively at both recommending relevant items and generating conversational dialogue.
We fine-tune the model on the ReDIAL conversational movie recommendation dataset, and create additional training tasks derived from MovieLens (such as predicting movie attributes and related movies from an input movie) in a multitask learning setting.
Using a series of probe studies, we show that the knowledge learned in the additional tasks transfers to the conversational setting, where each task leads to a 9%-52% increase in its related probe score.