• #ACL2021NLP #ACL2021 Please check out our group’s recent publication at the main conference of @aclmeeting. We uncovered a compositional generalization problem in NMT models and contributed a new dataset. Contributed by Yafu Li, Yongjing Yin, Yulong Chen, Yue Zhang.

  • Prof Yue Zhang leads the #NLP lab at Westlake University @Westlake_Uni. Our group focuses on machine learning-based natural language processing, as well as application-oriented tasks, such as web information extraction and financial market prediction. Welcome to join us!

  • #NLProc #ACL2021 G-Transformer for Document-level Machine Translation Paper: arxiv.org/abs/2105.14761 Code: github.com/baoguangsheng/ Our @aclmeeting paper at the main conference introduces locality bias to fix the failure of Transformer training on document-level MT data.

Tag: transfer learning

Language Models as Zero-shot Visual Semantic Learner

Yue Jiao (University of Southampton, UK), Jonathon Hare (University of Southampton, UK), …

Federated Learning Meets Natural Language Processing: A Survey

Ming Liu (Deakin University), Stella Ho (Deakin University), Mengqi Wang (Deakin University), …

Transfer Learning in Electronic Health Records through Clinical Concept Embedding

Jose Roberto Ayala Solares, Yajie Zhu, Abdelaali Hassaine, Shi…

When a crisis strikes: Emotion analysis and detection during COVID-19

Alexander Tekle (Electrical and Computer Engineering, The University of Texas at Austin), Chau Pham (Computer Sc…

Go Wider Instead of Deeper

Fuzhao Xue, Ziji Shi, Yuxuan Lou, Yong Liu, Yang You. Abstract: The transformer has recently achieved impressive results on various tasks. To further i…

Improve Unsupervised Pretraining for Few-label Transfer

Suichan Li1, Dongdong Chen2,*,†, Yinpeng Chen2, Lu Yuan2, Lei Zhang2, Qi Chu1, Bin Liu1, Nenghai Yu1 (1Univer…

LARGE: Latent-Based Regression through GAN Semantics

Yotam Nitzan (Tel-Aviv University), Rinon Gal* (Tel-Aviv University), Ofir Brenner (Tel-Aviv University), Daniel Cohen-Or (Tel-A…

Malware Analysis with Artificial Intelligence and a Particular Attention on Results Interpretability

Benjamin Marais (1Orange Labs, France; 2Department of Mathematics, LMNO, Universi…

Modelling Latent Translations for Cross-Lingual Transfer

Edoardo Maria Ponti1,2, Julia Kreutzer3, Ivan Vulić4, …

Bridging the Gap between Language Model and Reading Comprehension: Unsupervised MRC via Self-Supervision

Ning Bian1,2, Xianpei Han2, Bo Chen2, Hongyu Lin2, Ben He1,2, Le Sun2 (1Sc…