Research Article

Neural Coreference Resolution for Turkish

Year 2023, Volume: 6, Issue: 1, 85-95, 15.03.2023
https://doi.org/10.38016/jista.1225097

Abstract

Coreference resolution deals with resolving mentions of the same underlying entity in a given text. This challenging task is an indispensable aspect of text understanding and has important applications in various language processing systems such as question answering and machine translation. Although a significant number of studies are devoted to coreference resolution, research on Turkish is scarce and mostly limited to pronoun resolution. To the best of our knowledge, this article presents the first neural Turkish coreference resolution study, in which two learning-based models are explored. Both models follow the mention-ranking approach while forming clusters of mentions. The first model uses a set of hand-crafted features, whereas the second model relies on embeddings learned from large-scale pre-trained language models to capture similarities between a mention and its candidate antecedents. Several language models trained specifically for Turkish are used to obtain mention representations, and their effectiveness is compared using automatic metrics in our experiments. We argue that the results of this study shed light on the possible contributions of neural architectures to Turkish coreference resolution.
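
As an illustration of how pre-trained embeddings can drive a mention-ranking step, the sketch below links each mention to its highest-scoring candidate antecedent. It is an illustration only, not the implementation evaluated in the paper: it assumes the Hugging Face transformers library and the BERTurk checkpoint dbmdz/bert-base-turkish-cased (Schweter, 2021), embeds each mention string out of context by mean-pooling its subword vectors, and uses cosine similarity in place of a learned scoring function.

```python
# A minimal mention-ranking sketch (illustration only, not the paper's code).
# Assumptions: the Hugging Face `transformers` library is installed and the
# BERTurk checkpoint "dbmdz/bert-base-turkish-cased" is used; mentions are
# embedded out of context by mean-pooling their subword vectors.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "dbmdz/bert-base-turkish-cased"  # BERTurk (Schweter, 2021)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()


def embed_mention(mention: str) -> torch.Tensor:
    """Return the mean-pooled subword representation of a mention string."""
    inputs = tokenizer(mention, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden[0, 1:-1].mean(dim=0)  # drop [CLS]/[SEP], average the rest


def rank_antecedents(mention: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Rank candidate antecedents of a mention by cosine similarity."""
    m_vec = embed_mention(mention)
    scored = [
        (cand, torch.cosine_similarity(m_vec, embed_mention(cand), dim=0).item())
        for cand in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# Toy usage: rank candidate antecedents for the mention "başkent" (capital).
print(rank_antecedents("başkent", ["Ankara", "Türkiye", "nüfus"]))
```

In the study itself, several Turkish language models are compared as sources of mention representations, and antecedent scoring is learned rather than taken as a raw similarity as above.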

References

  • Aloraini, A., Yu, J., Poesio, M., 2020. Neural Coreference Resolution for Arabic. The Third Workshop on Computational Models of Reference, Anaphora and Coreference, pp. 99-110.
  • Auliarachman, T., Purwarianti, A., 2019. Coreference Resolution System for Indonesian Text with Mention Pair Method and Singleton Exclusion using Convolutional Neural Network. ICAICTA2019, The International Conference of Advanced Informatics: Concepts, Theory and Applications, pp. 1-5.
  • Bagga, A., Baldwin, B., 1998. Algorithms for scoring coreference chains. LREC 1998, The 1st International Conference on Language Resources and Evaluation, pp. 563-566.
  • Bengtson, E., Roth, D., 2008. Understanding the Value of Features for Coreference Resolution. EMNLP 2008, The Conference on Empirical Methods in Natural Language Processing, pp. 294-303.
  • Bhattacharjee, S., Haque, R., de Buy Wenniger, G.M., Way, A., 2020. Investigating Query Expansion and Coreference Resolution in Question Answering on BERT. In: E. Métais, F. Meziane, H. Horacek, P. Cimiano (Eds.), Natural Language Processing and Information Systems, NLDB 2020, Lecture Notes in Computer Science, 12089 pp. 47-59. Springer, Cham. doi:10.1007/978-3-030-51310-8_5
  • Bunescu, R., 2012. Adaptive Clustering for Coreference Resolution with Deterministic Rules and Web-Based Language Models. *SEM 2012, The First Joint Conference on Lexical and Computational Semantics, pp. 11–19.
  • Cai, J., Strube, M., 2010. End-to-End Coreference Resolution via Hypergraph Partitioning. Coling 2010, The 23rd International Conference on Computational Linguistics, pp. 143-151.
  • Clark, K., Manning, C.D., 2015. Entity-Centric Coreference Resolution with Model Stacking. ACL-IJCNLP 2015, The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pp. 1405-1415.
  • Clark, K., Luong, M.T., Le, Q.V., Manning, C.D., 2020. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ICLR 2020, The 8th International Conference on Learning Representations.
  • Denis, P., Baldridge, J., 2008. Specialized Models and Ranking for Coreference Resolution. The 2008 Conference on Empirical Methods in Natural Language Processing, pp. 660-669.
  • Devlin, J., Chang, M., Lee, K., Toutanova, K., 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL-HLT 2019, The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171-4186.
  • Durrett, G., Klein, D., 2013. Easy Victories and Uphill Battles in Coreference Resolution. EMNLP 2013, The 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1971-1982.
  • Fang, K., Jian, F., 2019. Incorporating Structural Information for Better Coreference Resolution. IJCAI 2019, The Twenty-Eighth International Joint Conference on Artificial Intelligence, pp. 5039-5045.
  • Fei, H., Li, X., Li, D., Li, P., 2019. End-to-end Deep Reinforcement Learning Based Coreference Resolution. ACL 2019, The 57th Annual Meeting of the Association for Computational Linguistics, pp. 660-665.
  • Grave, E., Bojanowski, P., Gupta, P., Joulin, A., Mikolov, T., 2018. Learning Word Vectors for 157 Languages. LREC 2018, The International Conference on Language Resources and Evaluation.
  • Hacioglu, K., Douglas, B., Chen, Y., 2005. Detection of entity mentions occurring in English and Chinese text. Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pp. 379-386.
  • Hoste, V., 2016. The Mention-Pair Model. In: M. Poesio, R. Stuckardt, Y. Versley (Eds.), Anaphora Resolution. Theory and Applications of Natural Language Processing, pp. 269-282. Springer, Berlin, Heidelberg. doi: 10.1007/978-3-662-47909-4_9
  • Kılıçaslan, Y., Güner, E.S., Yıldırım, S., 2009. Learning-based pronoun resolution for Turkish with a comparative evaluation. Computer Speech & Language, 23(3), 311-331. doi: 10.1016/j.csl.2008.09.001
  • Klemen, M., Žitnik, S., 2022. Neural Coreference Resolution for Slovene Language. Computer Science and Information Systems, 19(2), 495–521. doi: 10.2298/CSIS201120060K
  • Kriman, S., Ji, H., 2021. Joint Detection and Coreference Resolution of Entities and Events with Document-level Context Aggregation. ACL-IJCNLP 2021, The 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop, pp. 174-179.
  • Lai, T.M., Bui, T., Kim, D.S., 2022. End-To-End Neural Coreference Resolution Revisited: A Simple Yet Effective Baseline. ICASSP 2022, The IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8147-8151.
  • Lata, K., Singh, P., Dutta, K., 2022. Mention detection in coreference resolution: survey. Applied Intelligence, 52, 9816-9860. doi: 10.1007/s10489-021-02878-2
  • Lee, K., He, L., Lewis, M., Zettlemoyer, L., 2017. End-to-End Neural Coreference Resolution. EMNLP 2017, The 2017 Conference on Empirical Methods in Natural Language Processing, pp. 188-197.
  • Lee, K., He, L., Zettlemoyer, L., 2018. Higher-order coreference resolution with coarse-to-fine inference. NAACL 2018, The Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 687-692.
  • Li, Z., Shi, K., Chen, N.F., 2021. Coreference-Aware Dialogue Summarization. The 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 509-519.
  • Luo, X., Ittycheriah, A., Jing, H., Kambhatla, N., Roukos, S., 2004. A mention-synchronous coreference resolution algorithm based on the Bell tree. ACL 2004, The 42nd Annual Meeting of the Association for Computational Linguistics, pp. 135-142.
  • Luo, X., 2005. On Coreference Resolution Performance Metrics. Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pp. 25-32.
  • Mars, M., 2022. From Word Embeddings to Pre-Trained Language Models: A State-of-the-Art Walkthrough. Applied Sciences, 12(17). doi: 10.3390/app12178805
  • Miaschi, A., Dell’Orletta, F., 2020. Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation. The 5th Workshop on Representation Learning for NLP, pp. 110-119.
  • Niton, B., Morawiecki, P., Ogrodniczuk, M., 2018. Deep Neural Networks for Coreference Resolution for Polish. LREC 2018, The International Conference on Language Resources and Evaluation, pp. 395-400.
  • Pamay, T., Eryiğit, G., 2018. Turkish Coreference Resolution. INISTA 2018, The Innovations in Intelligent Systems and Applications, pp. 1-7.
  • Park, C., Lee, C., 2015. Mention Detection using Bidirectional LSTM-CRF Model. The Annual Conference on Human and Language Technology, pp. 224-227.
  • Park, C., Lee, C., Lim, S., 2017. Mention detection using pointer networks for coreference resolution. ETRI Journal, 39(5), 652-661. doi: 10.4218/etrij.17.0117.0140
  • Rahman, A., Ng, V., 2011. Narrowing the Modeling Gap: A Cluster-Ranking Approach to Coreference Resolution. Journal of Artificial Intelligence Research, 40, 469-521. doi: 10.1613/jair.3120
  • Rahman, A., Ng, V., 2011. Ensemble-based coreference resolution. IJCAI 2011, The Twenty-Second International Joint Conference on Artificial Intelligence, pp. 1884-1889.
  • Sahlani, H., Hourali, M., Minaei-Bidgoli, B., 2020. Coreference Resolution Using Semantic Features and Fully Connected Neural Network in the Persian Language. International Journal of Computational Intelligence Systems, 13(1), 1002-1013. doi: 10.2991/ijcis.d.200706.002
  • Sanh, V., Debut, L., Chaumond, J., Wolf, T., 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv. doi: 10.48550/arXiv.1910.01108
  • Sapena, E., Padró, L., Turmo, J., 2010. Relaxcor: A global relaxation labeling approach to coreference resolution. The 5th International Workshop on Semantic Evaluation, pp. 88-91.
  • Say, B., Zeyrek, D., Oflazer, K., Özge, U., 2002. Development of a corpus and a treebank for present-day written Turkish. ICTIL 2002, The 11th International Conference of Turkish Linguistics, pp. 183-192.
  • Schüller, P., Cıngıllı, K., Tunçer, F., Sürmeli, B. G., Pekel, A., Karatay, A. H., Karakaş, H. E., 2017. Marmara Turkish coreference corpus and coreference resolution baseline. arXiv. doi: 10.48550/arXiv.1706.01863
  • Schweter, S., 2021. BERTurk. https://github.com/stefan-it/turkish-bert
  • Soon, W.M., Ng, H.T., Lim, D.C.Y., 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4), 521–544. doi: 10.1162/089120101753342653
  • Soraluze, A., Arregi, O., Arregi, X., Ceberio, K., De Ilarraza, A.D., 2012. Mention detection: First steps in the development of a Basque coreference resolution system. KONVENS 2012, The 11th Conference on Natural Language Processing, pp. 128-136.
  • Steinberger, J., Poesio, M., Kabadjov, M.A., Ježek, K., 2007. Two uses of anaphora resolution in summarization. Information Processing & Management, 43(6), 1663-1680. doi: 10.1016/j.ipm.2007.01.010
  • Uryupina, O., Moschitti, A., 2015. A State-of-the-Art Mention-Pair Model for Coreference Resolution. The Fourth Joint Conference on Lexical and Computational Semantics, pp. 289-298.
  • Van der Heijden, N., Abnar, S., Shutova, E., 2020. A Comparison of Architectures and Pretraining Methods for Contextualized Multilingual Word Embeddings. AAAI 2020, The Thirty-Fourth AAAI Conference on Artificial Intelligence, pp. 9090-9097.
  • Vilain, M., Burger, J., Aberdeen, J., Connolly, D., Hirschman, L., 1995. A model-theoretic coreference scoring scheme. The 6th Message Understanding Conference (MUC-6), pp. 45-52.
  • Wang, B., Lu, W., Wang, Y., Jin, H., 2018. A Neural Transition-based Model for Nested Mention Recognition. The 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1011-1017.
  • Wiseman, S., Rush, A.M., Shieber, S., Weston, J., 2015. Learning Anaphoricity and Antecedent Ranking Features for Coreference Resolution. ACL-IJCNLP 2015, The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pp. 1416-1426.
  • Wiseman, S., Rush, A. M., Shieber, S. M., 2016. Learning Global Features for Coreference Resolution. The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 994-1004.
  • Yang, X., Su, J., Lang, J., Tan, C. L., Liu, T., Li, S., 2008. An Entity-Mention Model for Coreference Resolution with Inductive Logic Programming. ACL08-HLT, The 46th Annual Meeting of the Association for Computational Linguistics and the Human Language Technology Conference, pp. 843-851.
  • Yıldırım, S., Kılıçaslan, Y., Yıldız, T., 2009. Pronoun Resolution in Turkish Using Decision Tree and Rule-Based Learning Algorithms. In: Z. Vetulani, H. Uszkoreit (Eds.), Human Language Technology, Challenges of the Information Society, LTC 2007, Lecture Notes in Computer Science, 5603, pp. 270-278. Springer. doi: 10.1007/978-3-642-04235-5_23
  • Yu, Y., Zuo, S., Jiang, H., Ren, W., Zhao, T., Zhang, C., 2021. Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach, The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1063-1077.
  • Zhang, R., Nogueira dos Santos, C., Yasunaga, M., Xiang, B., Radev, D., 2018. Neural Coreference Resolution with Deep Biaffine Attention by Joint Mention Detection and Mention Clustering. The 56th Annual Meeting of the Association for Computational Linguistics, pp. 102-107.
  • Zhang, T., Wu, F., Katiyar, A., Weinberger, K. Q., Artzi, Y., 2021. Revisiting Few-sample BERT Fine-tuning. The 9th International Conference on Learning Representations.
  • Zitouni, I., Sorensen, J., Luo, X., Florian, R., 2005. The impact of morphological stemming on Arabic mention detection and coreference resolution. The ACL Workshop on Computational Approaches to Semitic Languages, pp. 63-70.

There are 56 references in total.

Details

Primary Language: English
Subjects: Artificial Intelligence
Section: Research Article
Authors

Şeniz Demir 0000-0003-4897-4616

Early View Date: 27 December 2022
Publication Date: 15 March 2023
Submission Date: 27 December 2022
Published Issue: Year 2023, Volume: 6, Issue: 1

Cite

APA Demir, Ş. (2023). Neural Coreference Resolution for Turkish. Journal of Intelligent Systems: Theory and Applications, 6(1), 85-95. https://doi.org/10.38016/jista.1225097
AMA Demir Ş. Neural Coreference Resolution for Turkish. jista. March 2023;6(1):85-95. doi:10.38016/jista.1225097
Chicago Demir, Şeniz. “Neural Coreference Resolution for Turkish”. Journal of Intelligent Systems: Theory and Applications 6, no. 1 (March 2023): 85-95. https://doi.org/10.38016/jista.1225097.
EndNote Demir Ş (01 March 2023) Neural Coreference Resolution for Turkish. Journal of Intelligent Systems: Theory and Applications 6 1 85–95.
IEEE Ş. Demir, “Neural Coreference Resolution for Turkish”, jista, vol. 6, no. 1, pp. 85–95, 2023, doi: 10.38016/jista.1225097.
ISNAD Demir, Şeniz. “Neural Coreference Resolution for Turkish”. Journal of Intelligent Systems: Theory and Applications 6/1 (March 2023), 85-95. https://doi.org/10.38016/jista.1225097.
JAMA Demir Ş. Neural Coreference Resolution for Turkish. jista. 2023;6:85–95.
MLA Demir, Şeniz. “Neural Coreference Resolution for Turkish”. Journal of Intelligent Systems: Theory and Applications, vol. 6, no. 1, 2023, pp. 85-95, doi:10.38016/jista.1225097.
Vancouver Demir Ş. Neural Coreference Resolution for Turkish. jista. 2023;6(1):85-9.

Zeki Sistemler Teori ve Uygulamaları Dergisi