[A] Internship Description
Traditionally in classification, a learning example refers to a single object oA that must be assigned to some class given an input set of features. For example, in sentiment classification, a text transformed into some learning representation can be classified as positive, negative or neutral. But many other situations exist. In this internship, we are particularly interested in identifying relations between pairs of objects. In this case, a learning example (oA r oB) refers to a pair of objects oA and oB that must be transformed into some learning representation so that the model can predict whether some relation r holds between them or not. For instance, in image classification, a model may have to predict whether two patches are similar or not.

Within this context, the vast majority of works have tackled symmetric relations (e.g. matching images, synonymy detection), while asymmetric relations (e.g. textual entailment, temporal image ordering) have received less attention. Indeed, asymmetry has been shown to be a difficult problem, as it involves the direction of the relation. So, if two objects oA and oB are in an asymmetric relation, say (oA → oB), then the learning example (oA → oB) is a positive example, while (oB → oA) is a negative one.

In this internship, we are particularly interested in designing learning models that can effectively handle asymmetry. Different work directions will be studied, such as (1) the development of specific asymmetric deep learning architectures [2, 9], (2) the computation of semantic feature spaces capable of handling asymmetry [7, 1, 3, 6], or (3) a combination of both ideas. In particular, multimodal (text, image and text/image) experiments will be performed on gold standard data sets in order to evaluate the performance of the different proposed models.
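To make the directionality issue concrete, the following is a minimal sketch (with hypothetical random vectors and scorers, not the internship's actual models) of why a symmetric scorer such as cosine similarity cannot distinguish (oA → oB) from (oB → oA), whereas a non-symmetric bilinear form can:

```python
import numpy as np

# Hypothetical setup: each object is represented by a feature vector.
rng = np.random.default_rng(0)
dim = 4
o_a = rng.normal(size=dim)  # representation of object oA
o_b = rng.normal(size=dim)  # representation of object oB

# For a directed relation (oA -> oB), the ordered pair is a positive
# example and the reversed pair is a negative one.
positive = (o_a, o_b, 1)
negative = (o_b, o_a, 0)

def cosine(x, y):
    """Symmetric scorer: cosine(x, y) == cosine(y, x) by construction."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def bilinear(x, y, W):
    """Asymmetric scorer: x^T W y differs from y^T W x when W != W^T."""
    return float(x @ W @ y)

W = rng.normal(size=(dim, dim))  # a generic (non-symmetric) matrix

print(cosine(o_a, o_b) == cosine(o_b, o_a))            # True: order is lost
print(bilinear(o_a, o_b, W) == bilinear(o_b, o_a, W))  # False: order is kept
```

The same argument motivates the work directions above: either the architecture itself breaks the symmetry, or the feature space is shaped so that direction becomes recoverable.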
[B] Candidate Profile
The successful candidate must be pursuing Master's or Engineering School level studies in Data Science, Computer Science, Applied Mathematics, or related scientific fields, and show a strong background in Mathematics, Machine Learning and Programming. Additional knowledge in Deep Learning (both theoretical and practical) will be highly appreciated, as well as experience in Natural Language Processing and/or Computer Vision.
[C] Internship Organization
The internship will start at the beginning of 2019 (January, February or March) and will last up to 6 months. It will take place at the CNRS GREYC UMR 6072 Laboratory in Caen (France). Some visits to the internship partner Crédit Agricole Brie Picardie in Serris, near Paris, will be planned. The candidate will be compensated following the rules in force for internships. Note that the internship is subject to the establishment of a preliminary internship agreement.
[D] Internship Perspectives
Depending on the obtained results and the dedication/motivation of the successful candidate, there may be a possibility to pursue PhD studies in collaboration with all the internship partners, i.e. CNRS, Normandy University and Crédit Agricole Brie Picardie.
[E] Application Procedure
Applicants are requested to submit an academic CV, copies of academic degree records and certificates, and two potential references (if possible). Applications must be sent directly to the internship coordinators: Gaël Dias (email@example.com), Youssef Chahir (firstname.lastname@example.org) and Houssam Akhmouch (email@example.com).
Note that the CNRS GREYC UMR 6072 is committed to being a fully inclusive institution which actively recruits staff from all sectors of society. It is proud to be an equal opportunities employer and encourages applications from everybody, regardless of race, sex, ethnicity, religion, nationality, sexual orientation, age, disability, gender identity, marital status/civil partnership, pregnancy and maternity, as well as being open to flexible working practices.
References

[1] Haw-Shiuan Chang, ZiYun Wang, Luke Vilnis, and Andrew McCallum. Distributional inclusion vector embedding for unsupervised hypernymy detection. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 485–495, 2018.
[2] Goran Glavaš and Simone Paolo Ponzetto. Dual tensor model for detecting asymmetric lexico-semantic relations. In Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1757–1767, 2017.
[3] Goran Glavaš and Ivan Vulić. Explicit retrofitting of distributional word vectors. In 56th Annual Meeting of the Association for Computational Linguistics (ACL), volume 1, pages 34–45, 2018.
[4] Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. Recurrent convolutional neural networks for text classification. In 29th AAAI Conference on Artificial Intelligence (AAAI), pages 2267–2273, 2015.
[5] Simon Moura, Amir Azarbaev, and Massih-Reza Amini. Heterogeneous dyadic multi-task learning with implicit feedback. In 25th International Conference on Neural Information Processing (ICONIP), 2018.
[6] Marek Rei, Daniela Gerz, and Ivan Vulić. Scoring lexical entailment with a supervised directional similarity network. In 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 638–643, 2018.
[7] Ivan Vulić and Nikola Mrkšić. Specialising word vectors for lexical entailment. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1134–1145, 2018.
[8] Ekaterina Vylomova, Laura Rimell, Trevor Cohn, and Timothy Baldwin. Take and took, gaggle and goose, book and read: Evaluating the utility of vector differences for lexical relation learning. In 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1671–1682, 2016.
[9] Qi Wang, Tong Ruan, Yangming Zhou, Daqi Gao, and Ping He. An attention-based Bi-GRU-CapsNet model for hypernymy detection between compound entities. arXiv preprint arXiv:1805.04827, 2018.
[10] Sergey Zagoruyko and Nikos Komodakis. Learning to compare image patches via convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4353–4361, 2015.