Research Article

TÜRKÇE SAĞLIK DANIŞMANLIĞINDA BÜYÜK DİL MODELLERİNİN HASTA-DOKTOR İLETİŞİMİNDE KULLANIM POTANSİYELİ

Year 2025, Volume: 28 Issue: 2, 802 - 822, 03.06.2025

Abstract

Bu çalışma, Türkçe sağlık danışmanlığında kullanılan dört farklı büyük dil modelinin (doktor-meta-llama-3-8b, doktor-LLama2-sambanovasystems-7b, doktor-Mistral-trendyol-7b ve doktor-llama-3-cosmos-8b) performansını değerlendirmektedir. Modeller, 321.179 hasta-doktor soru-cevap çiftinden oluşan Patient Doctor Q&A TR 321179 veri kümesi üzerinde ince ayar yapılarak eğitilmiştir. Performans ölçümünde BLEU ve BERT skor gibi sentetik değerlendirmelerin yanı sıra, Elo puanlaması ile uzman doktorların yanıt kalitesi incelemeleri de kullanılmıştır. Sonuçlar, doktor-LLama2-sambanovasystems-7b modelinin genel başarı bakımından en iyi performansı sergilediğini göstermiş, bu model uzman doktor incelemelerinden de 3.25 puan almıştır. Öte yandan, doktor-Mistral-trendyol-7b modeli %18,4 ile en düşük zararlı yanıt oranına sahip model olarak öne çıkmıştır. Bu çalışma, Türkçe sağlık hizmetlerinde yapay zekâ destekli sanal doktor asistanlarının potansiyelini göstermekte ve dile özgü modellerin geliştirilmesinin önemini vurgulamaktadır.

References

  • Akyon, F. C., Cavusoglu, D., Cengiz, C., Altinuc, S. O., & Temizel, A. (2021). Automated question generation and question answering from Turkish texts. arXiv preprint arXiv:2111.06476.
  • Anthropic. (2024). Claude: A New AI Assistant by Anthropic. Retrieved August 16, 2024, from https://www.anthropic.com/news/claude-3-5-sonnet
  • LMSYS Chatbot Arena Leaderboard. (2024). LMSYS Chatbot Arena Leaderboard. Retrieved August 16, 2024, from https://chat.lmsys.org/?leaderboard
  • Avaliev, A. (2024). Chat Doctor Dataset. Retrieved August 18, 2024, from https://huggingface.co/datasets/avaliev/chat_doctor
  • Bayram, M. A. (2024). Türkçe Tıbbi Soru-Cevap Veri Seti [Data set]. Zenodo. https://doi.org/10.5281/zenodo.12770916
  • Brown, T. B. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
  • Bulut, M. K. (2024a). Patient Doctor Q&A TR 321179 [Data set]. Zenodo. https://doi.org/10.5281/zenodo.12798934
  • Bulut, M. K. (2024b). Patient Doctor Q&A TR 5695 [Data set]. Retrieved from https://huggingface.co/datasets/kayrab/patient-doctor-qa-tr-5695
  • Bulut, M. K. (2024c). Patient Doctor Q&A TR 95588 [Data set]. Retrieved from https://huggingface.co/datasets/kayrab/patient-doctor-qa-tr-95588
  • Bulut, M. K. (2024d). Patient Doctor Q&A TR 19583 [Data set]. Retrieved from https://huggingface.co/datasets/kayrab/patient-doctor-qa-tr-19583
  • Bulut, M. K. (2024e). Patient Doctor Q&A TR 167732 [Data set]. Retrieved from https://huggingface.co/datasets/kayrab/patient-doctor-qa-tr-167732
  • Bulut, M. K., & Diri, B. (2024f). Artificial Intelligence Revolution in Turkish Health Consultancy: Development of LLM-Based Virtual Doctor Assistants. In 2024 8th International Artificial Intelligence and Data Processing Symposium (IDAP) (pp. 1–6). IEEE.
  • Chikhaoui, E., Alajmi, A., & Larabi-Marie-Sainte, S. (2022). Artificial intelligence applications in healthcare sector: ethical and legal challenges. Emerging Science Journal, 6(4), 717–738.
  • Chiang, W.-L., Li, Z., Lin, Z., Sheng, Y., Wu, Z., Zhang, H., Zheng, L., Zhuang, S., Zhuang, Y., & Zhou, D. (2024). Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference. arXiv preprint arXiv:2403.04132 [cs.AI].
  • Chen, Y., Nayman, N., Greenfeld, D., Gal, Y., & Berant, J. (2022). Towards learning universal hyperparameter optimizers with transformers. Advances in Neural Information Processing Systems, 35, 32053–32068.
  • Dettmers, T., Lewis, M., Shleifer, S., & Zettlemoyer, L. (2021). 8-bit optimizers via block-wise quantization. arXiv preprint arXiv:2110.02861.
  • Devlin, J. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • Elo, A. E., & Sloan, S. (1978). The rating of chessplayers: Past and present. New York: Arco Pub.
  • Fan, Z., Tang, J., Chen, W., Wang, S., Wei, Z., Xi, J., ... & Zhou, J. (2024). AI Hospital: Benchmarking large language models in a multi-agent medical interaction simulator. arXiv preprint arXiv:2402.09742.
  • Google. (2024a). Gemini: Google’s AI Model for Multimodal Understanding. Retrieved August 16, 2024, from https://deepmind.google/technologies/gemini/pro/
  • Google. (2024b). Google Colab. Retrieved September 8, 2024, from https://colab.google/
  • Güneş, Y. C., & Ülkir, M. (2024). Comparative Performance Evaluation of Multimodal Large Language Models, Radiologist, and Anatomist in Visual Neuroanatomy Questions. Uludağ Üniversitesi Tıp Fakültesi Dergisi, 50(3), 551–556.
  • Henry41. (2024). iCliniq Medical QA Dataset. Retrieved from https://www.kaggle.com/datasets/henry41148/icliniq-medical-qa
  • Hermansyah, I. D. (2024). Doctor-ID-QA Dataset. Retrieved from https://huggingface.co/datasets/hermanshid/doctor-id-qa
  • Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., de Las Casas, D., Hendricks, L. A., Welbl, J., Clark, A., Hennigan, T., Noland, E., Millican, K., van den Driessche, G., Damoc, B., Guy, A., Osindero, S., Simonyan, K., Elsen, E., Rae, J. W., Vinyals, O., & Sifre, L. (2022). Training compute-optimal large language models. arXiv preprint arXiv:2203.15556.
  • Kesgin, H. T., Yuce, M. K., Dogan, E., Uzun, M. E., Uz, A., Seyrek, H. E., Zeer, A., & Amasyali, M. F. (2024). Introducing cosmosGPT: Monolingual Training for Turkish Language Models.
  • Labrak, Y., Bazoge, A., Morin, E., Gourraud, P. A., Rouvier, M., & Dufour, R. (2024). Biomistral: A collection of open-source pretrained large language models for medical domains. arXiv preprint arXiv:2402.10373.
  • Li, J., Lai, Y., Li, W., Ren, J., Zhang, M., Kang, X., ... & Liu, Y. (2024). Agent hospital: A simulacrum of hospital with evolvable medical agents. arXiv preprint arXiv:2405.02957.
  • Matsumoto, M., & Nishimura, T. (1998). Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator. ACM Transactions on Modeling and Computer Simulation (TOMACS), 8(1), 3-30.
  • Meta AI. (2024). LLaMA 3.1: Meta’s Next-Generation Large Language Model. Retrieved August 8, 2024, from https://huggingface.co/meta-llama/Meta-Llama-3.1-70B
  • meta-llama. (2024). meta-llama/Meta-Llama-3-8B. Retrieved August 16, 2024, from https://huggingface.co/meta-llama/Meta-Llama-3-8B
  • Microsoft. (2024). GitHub Copilot: AI-Powered Code Completion by Microsoft. Retrieved August 16, 2024, from https://copilot.microsoft.com/
  • NVIDIA. (2024). NVIDIA A100 Tensor Core GPU. Retrieved August 8, 2024, from https://www.nvidia.com/tr-tr/data-center/a100/
  • Oğul, İ. Ü., Soygazi, F., & Bostanoğlu, B. E. (2025). TurkMedNLI: a Turkish medical natural language inference dataset through large language model based translation. PeerJ Computer Science, 11, e2662.
  • OpenAI. (2024a). GPT-3.5 Turbo. Retrieved July 15, 2024, from https://platform.openai.com/docs/models/gpt-3-5-turbo
  • OpenAI. (2024b). GPT-4o: OpenAI’s Language Model. Retrieved August 16, 2024, from https://openai.com/index/hello-gpt-4o/
  • OpenAI. (2024c). GPT-4: OpenAI’s Language Model. Retrieved August 21, 2024, from https://openai.com/index/gpt-4/
  • Park, C.-W., Seo, S. W., Kang, N., Ko, B., Choi, B. W., Park, C. M., Chang, D. K., Kim, H., Kim, H., Lee, H., Jang, J., Ye, J. C., Jeon, J. H., Seo, J. B., Kim, K. J., Jung, K.-H., Kim, N., Paek, S., Shin, S.-Y., ... Yoon, H.-J. (2020). Artificial intelligence in health care: Current applications and issues. Journal of Korean Medical Science, 35(42), e379. https://doi.org/10.3346/jkms.2020.35.e379
  • Peng, Y., Yan, S., & Lu, Z. (2019). Transfer learning in biomedical natural language processing: an evaluation of BERT and ELMo on ten benchmarking datasets. arXiv preprint arXiv:1906.05474.
  • Sambanovasystems. (2024). sambanovasystems/SambaLingo-Turkish-Chat. Retrieved August 16, 2024, from https://huggingface.co/sambanovasystems/SambaLingo-Turkish-Chat
  • Singhal, K., Tu, T., Gottweis, J., Sayres, R., Wulczyn, E., Amin, M., ... & Natarajan, V. (2025). Toward expert-level medical question answering with large language models. Nature Medicine, 1-8.
  • Trendyol. (2024). Trendyol/Trendyol-LLM-7b-chat-v1.8. Retrieved August 16, 2024, from https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.8
  • Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., Bikel, D., Blecher, L., Canton Ferrer, C., Chen, M., Cucurull, G., Esiobu, D., Fernandes, J., Fu, J., Fu, W., Fuller, B., Gao, C., Goswami, V., Goyal, N., Hartshorn, A., Hosseini, S., Hou, R., Inan, H., Kardas, M., Kerkez, V., Khabsa, M., Kloumann, I., Korenev, A., Koura, P. S., Lachaux, M.-A., Lavril, T., Lee, J., Liskovich, D., Lu, Y., Mao, Y., Martinet, X., Mihaylov, T., Mishra, P., Molybog, I., Nie, Y., Poulton, A., Reizenstein, J., Rungta, R., Saladi, K., Schelten, A., Silva, R., Smith, E. M., Subramanian, R., Tan, X. E., Tang, B., Taylor, R., Williams, A., Kuan, J. X., Xu, P., Yan, Z., Zarov, I., Zhang, Y., Fan, A., Kambadur, M., Narang, S., Rodriguez, A., Stojnic, R., Edunov, S., & Scialom, T. (2023). Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
  • Ucar, A., Nayak, S., Roy, A., Taşcı, B., & Taşcı, G. (2025). A Comprehensive Study on Fine-Tuning Large Language Models for Medical Question Answering Using Classification Models and Comparative Analysis. arXiv preprint arXiv:2501.17190.
  • Unsloth. (2024). Unsloth: Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2–5x faster with 80% less memory. Retrieved August 8, 2024, from https://github.com/unslothai/unsloth
  • Vaswani, A. (2017). Attention is all you need. Advances in Neural Information Processing Systems.
  • Wu, S., & Sun, M. (2022). Exploring the efficacy of pre-trained checkpoints in text-to-music generation task. arXiv preprint arXiv:2211.11216.
  • Yıldız, M. S., & Alper, A. (2023). Can ChatGPT-4 diagnose in Turkish: a comparison of ChatGPT responses to health-related questions in English and Turkish. Journal of Consumer Health on the Internet, 27(3), 294-307.
  • ytu-ce-cosmos. (2024). ytu-ce-cosmos/Turkish-Llama-8b-v0.1. Retrieved August 16, 2024, from https://huggingface.co/ytu-ce-cosmos/Turkish-Llama-8b-v0.1
  • Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., & Artzi, Y. (2019). BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

THE POTENTIAL USE OF LARGE LANGUAGE MODELS IN PATIENT-DOCTOR COMMUNICATION IN TURKISH HEALTH CONSULTATION


Abstract

This study evaluates the performance of four large language models for Turkish healthcare consultancy: doctor-meta-llama-3-8b, doctor-LLama2-sambanovasystems-7b, doctor-Mistral-trendyol-7b, and doctor-llama-3-cosmos-8b. The models were fine-tuned on the Patient Doctor Q&A TR 321179 dataset, which consists of 321,179 patient-doctor question-answer pairs. Performance was measured with automatic metrics such as BLEU and BERTScore, as well as expert doctors' reviews of response quality ranked through Elo scoring. The results showed that the doctor-LLama2-sambanovasystems-7b model achieved the best overall performance, receiving a score of 3.25 in the expert doctor evaluations, while the doctor-Mistral-trendyol-7b model stood out with the lowest harmful-response rate, at 18.4%. This study highlights the potential of AI-powered virtual doctor assistants in Turkish healthcare services and emphasizes the importance of developing language-specific models.
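The expert review described above ranks model answers with Elo scoring (Elo & Sloan, 1978), which converts pairwise preferences into ratings. As a rough illustration only, here is a minimal Elo update in Python; the K-factor of 32, the starting rating of 1000, and the model names are illustrative assumptions, not parameters or identifiers from the study.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of rating r_a against rating r_b (standard Elo formula)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Update both ratings after one comparison.

    score_a is 1.0 if A's answer was preferred, 0.0 if B's, 0.5 for a tie.
    k (the K-factor) controls how strongly one comparison moves the ratings.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b


# Hypothetical example: both models start at 1000; an expert prefers model_a's answer.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
ratings["model_a"], ratings["model_b"] = elo_update(
    ratings["model_a"], ratings["model_b"], score_a=1.0
)
# With equal starting ratings and k=32, model_a gains 16 points and model_b loses 16.
```

Repeating this update over many expert comparisons yields a leaderboard of the four models; note that the total rating mass is conserved across each update.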


Details

Primary Language Turkish
Subjects Natural Language Processing
Journal Section Computer Engineering
Authors

Muhammed Kayra Bulut 0009-0000-3107-7121

Banu Diri 0000-0002-6652-4339

Publication Date June 3, 2025
Submission Date January 5, 2025
Acceptance Date March 23, 2025
Published in Issue Year 2025, Volume: 28, Issue: 2

Cite

APA Bulut, M. K., & Diri, B. (2025). TÜRKÇE SAĞLIK DANIŞMANLIĞINDA BÜYÜK DİL MODELLERİNİN HASTA-DOKTOR İLETİŞİMİNDE KULLANIM POTANSİYELİ. Kahramanmaraş Sütçü İmam Üniversitesi Mühendislik Bilimleri Dergisi, 28(2), 802-822.