LLNL/LUAR

A Transformer-based author representation learning model

Tags: Common Product · Programming · Natural Language Processing · Author Verification
LLNL/LUAR is a Transformer-based model for learning author representations, built for research on cross-domain transfer in author verification. Introduced in an EMNLP 2021 paper, it investigates whether author representations learned in one domain transfer to another. The model scales to large datasets and supports zero-shot transfer across diverse domains such as Amazon reviews, fanfiction short stories, and Reddit comments. The project is open source under the Apache-2.0 license, allowing free use.
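The verification setup described above can be sketched in a few lines: encode each "episode" (a set of texts by one author) into a single vector, then compare two episodes by cosine similarity against a threshold. The `embed` function below is a toy character-trigram stand-in, not the actual LUAR encoder, used purely to illustrate the pipeline; the threshold value is likewise illustrative.

```python
import math

def embed(texts, dim=64):
    """Toy stand-in for an author-representation encoder such as LUAR.
    Real LUAR encodes a whole episode (several texts by one author) into
    one vector; here we just hash character trigrams into a small vector
    and L2-normalize it, to keep the example self-contained."""
    vec = [0.0] * dim
    for text in texts:
        for i in range(len(text) - 2):
            vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def verify(episode_a, episode_b, threshold=0.8):
    """Same-author decision: cosine similarity of episode embeddings.
    Because both vectors are unit-normalized, the dot product is the
    cosine similarity."""
    a, b = embed(episode_a), embed(episode_b)
    return sum(x * y for x, y in zip(a, b)) >= threshold
```

In the zero-shot cross-domain setting the paper studies, the same trained encoder would be applied unchanged to episodes from a new domain (e.g. trained on Reddit comments, evaluated on fanfiction), with only the similarity threshold tuned.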

LLNL/LUAR Visit Over Time

Monthly Visits: 490,881,889
Bounce Rate: 37.92%
Pages per Visit: 5.6
Visit Duration: 00:06:18
