Background
Machine Reading Comprehension (MRC), which enables computers to read, process, and understand natural language text, is considered one of the core abilities of artificial intelligence. It is of great value for next-generation search engines and intelligent agent products, and has received wide attention across academia and industry in recent years.
Last year, the China Computer Federation (CCF), the Chinese Information Processing Society of China (CIPS), and Baidu Inc. jointly organized the "2018 NLP Challenge on Machine Reading Comprehension", which greatly promoted the development of MRC technologies. The winning systems in the 2018 competition were able to answer more than 75% of the questions correctly (see "the overview of the 2018 competition" for details). The 2019 Language and Intelligence Challenge again features an MRC task, focusing on hard questions that current MRC systems fail to answer correctly. The aim of this year's challenge is to comprehensively evaluate the ability of machines to conduct in-depth natural language understanding so as to answer complex questions.
The challenge provides a large-scale, open-domain, application-oriented Chinese MRC dataset and a platform for research and academic exchanges on MRC, NLU, and other AI technologies. The competition forum and award ceremony will be held at the fourth "Language & Intelligence Summit". All researchers and developers are welcome to participate in this task.
About the Task
1. Task Description
Given a question "q" and a set of documents D = {d1, d2, ..., dn}, the participating MRC system is expected to output an answer "a" that best answers "q" based on the evidence in D.
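To make the task interface concrete, here is a minimal sketch in Python: a toy function that takes a question q and a document set D and returns an answer a. The overlap-based sentence scoring is only an illustration of the input/output contract, not the competition baseline, and the sentence splitting is a simplifying assumption.

```python
def answer(q: str, documents: list[str]) -> str:
    """Toy MRC system: return the document sentence with the highest
    word overlap with the question q. Illustrative only."""
    q_words = set(q.lower().split())
    best, best_score = "", -1
    for doc in documents:
        for sent in doc.split("."):  # naive sentence split (assumption)
            sent = sent.strip()
            if not sent:
                continue
            score = len(q_words & set(sent.lower().split()))
            if score > best_score:
                best, best_score = sent, score
    return best

docs = [
    "Paris is the capital of France. It is a large city.",
    "Berlin is the capital of Germany.",
]
print(answer("what is the capital of France", docs))
```

A real participating system would of course rank evidence across all five provided documents and generate (rather than merely extract) an answer, but the signature `(q, D) -> a` is the essence of the task.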
2. Dataset
This task is an extension of the 2018 NLP Challenge on Machine Reading Comprehension. The dataset contains 280k questions sampled from real anonymized user queries from Baidu Search. Each question has 5 corresponding evidence documents and human-generated answers. The dataset is divided into a training set (270k questions), a development set (about 3k questions), and a test set (about 7k questions). The training set is the same as that used in the 2018 competition, which has been released in the DuReader dataset. The development and test sets consist of complex questions that the winning systems of the 2018 competition failed to answer correctly. On these complex questions, MRC systems perform substantially worse than humans; how to enable machines to answer complex questions remains a challenging problem. The new development and test sets will be available for download after the registration deadline.
3. Evaluation Metrics
ROUGE-L and BLEU-4 are adopted as the basic evaluation metrics to measure the performance of participating systems, with the former as the main measurement. Some minor modifications are made to the original ROUGE-L and BLEU-4 metrics to better measure performance on YES-NO and ENTITY type questions (please refer to the dataset paper for details of the evaluation metrics).
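For reference, the standard (unmodified) ROUGE-L F-score is computed from the longest common subsequence (LCS) between the candidate and reference token sequences. The sketch below implements that textbook definition; the competition's YES-NO/ENTITY adjustments are not modeled here, and the `beta=1.2` recall weighting is an assumption from the original ROUGE formulation.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token sequences a, b
    (standard dynamic programming)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate, reference, beta=1.2):
    """ROUGE-L F-measure over token lists; beta > 1 weights recall
    more heavily than precision."""
    lcs = lcs_length(candidate, reference)
    if lcs == 0:
        return 0.0
    prec = lcs / len(candidate)
    rec = lcs / len(reference)
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)
```

For example, an exact match such as `rouge_l(["the", "cat", "sat"], ["the", "cat", "sat"])` yields 1.0, while disjoint sequences score 0.0.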
*Please refer to the specification enclosed in the dataset package for details.
4. Baseline Systems & GPU Computing Resources
This year's challenge provides two open-source baseline systems, implemented in PaddlePaddle and TensorFlow respectively. Compared with the 2018 competition, the PaddlePaddle baseline has been upgraded and achieves better performance. Please refer to the source code and the dataset paper for details. Baidu AI Studio provides a free GPU cluster and hosts the baseline systems.
Participation Info
1. Eligibility
The task is open to all individuals, research institutions, colleges, universities, and enterprises in related fields.
2. Registration
Please click the register button in the top right corner to sign up. If you have any questions, please email us or scan the QR code on the right to ask.
*Teams that register and submit valid results will receive a commemorative T-shirt for each member.
3. Registration Deadline
March 31st, 2019
Scan QR code to join group chat
Timeline
Feb 25
Registration open, training set available
Mar 31
Registration close, dev set & partial test set available
May 13
Full test set available
May 20
Testing results submission due
May 31
Final results announcement, system report submission
Aug 24
Workshop and award ceremony
Awards Setting
The challenge will award one First Prize, two Second Prizes, and two Third Prizes. Winners will receive award certificates issued by CCF & CIPS. The prizes and travel grants for attending the workshop and award ceremony will be sponsored by Baidu Inc.
First Prize: ¥30,000 + award certificate
Second Prize: ¥20,000 + award certificate
Third Prize: ¥10,000 + award certificate
*Notes:
1. All prizes are inclusive of taxes.
2. To receive an award, participants must provide their system report (including method description, system code & data, references, etc.) and the name list of team members.