Welcome, computer science and information computing enthusiasts!

Tag: language

LayoutLM: Pre-training of Text and Layout for Document Image Understanding

Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. Harbin Institute of Te……

Does Object Recognition Work for Everyone?

Terrance DeVries, Ishan Misra, Changhan Wang, Laurens van der Maaten……

Folding-based compression of point cloud attributes

Abstract: Existing techniques to compress point cloud attributes leverage either geometric or video-based compression tools. ……

Attention Is All You Need

Ashish Vaswani (Google Brain), Noam Shazeer (Google Brain), Niki Parmar (Google Res……

Fast Sparse ConvNets

Erich Elsen, Marat Dukhan, Trevor Gale, Karen Simonyan. DeepMind and Google. These authors contributed equally……

Small-GAN: Speeding up GAN Training using Core-Sets

Samarth Sinha, Han Zhang (Google Brain), Anirudh Goyal (Mila, Université de Montréal), Yoshua Bengio, Hugo Larochelle (Google Bra……

Revisit Knowledge Distillation: a Teacher-free framework

Li Yuan, Francis E. H. Tay, Guilin Li, Tao Wang, Jiashi Feng. National University of Sin……

Self-training with Noisy Student improves ImageNet classification

Qizhe Xie, Eduard Hovy, Minh-Thang Luong, Quoc V. Le. Google Research, Brain Team; Carnegie Mello……

Model Cards for Model Reporting

Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru……

Fine-Tuning a Transformer-Based Language Model to Avoid Generating Non-Normative Text

Xiangyu Peng, Siyan Li, Spencer Frazier, Mark O. Riedl……