Publications

2025

  1. Aligned but Stereotypical? The Hidden Influence of System Prompts on Social Bias in LVLM-Based Text-to-Image Models
    NaHyeon Park*, Na Min An*, Kunhee Kim, Soyeon Yoon, Jiahao Huo, and Hyunjung Shim
    arXiv preprint arXiv:2512.04981, 2025
    * indicates equal contribution.
  2. World in a Frame: Understanding Culture Mixing as a New Challenge for Vision-Language Models
    Eunsu Kim*, Junyeong Park*, Na Min An*, Junseong Kim, Hitesh Laxmichand Patel, Jiho Jin, Julia Kruk, Amit Agarwal, Srikant Panda, Fenal Ashokbhai Ilasariya, Hyunjung Shim, and Alice Oh
    arXiv preprint arXiv:2511.22787, 2025
    * indicates equal contribution.
  3. Multi-Objective Task-Aware Predictor for Image-Text Alignment
    Eunki Kim*, Na Min An*, James Thorne, and Hyunjung Shim
    arXiv preprint arXiv:2510.00766, 2025
    * indicates equal contribution.
  4. CoPatch: Zero-Shot Referring Image Segmentation by Leveraging Untapped Spatial Knowledge in CLIP
    Na Min An, Inha Kang, Minhyun Lee, and Hyunjung Shim
    arXiv preprint arXiv:2509.23098, 2025
  5. How Blind and Low-Vision Individuals Prefer Large Vision-Language Model-Generated Scene Descriptions
    Na Min An*, Eunki Kim*, Wan Ju Kang, Sangryul Kim, James Thorne, and Hyunjung Shim
    arXiv preprint arXiv:2502.14883, 2025
    * indicates equal contribution.
  6. VLM-Guided Visual Place Recognition for Planet-Scale Geo-Localization
    Sania Waheed, Na Min An, Michael Milford, Sarvapali D. Ramchurn, and Shoaib Ehsan
    In ACRA, 2025
  7. Image Embedding Sampling Method for Diverse Captioning
    Sania Waheed* and Na Min An*
    In EMNLP Main, 2025
    * indicates equal contribution.
  8. I0T: Embedding Standardization Method Towards Zero Modality Gap
    Na Min An*, Eunki Kim*, James Thorne, and Hyunjung Shim
    In ACL (Outstanding; Top 1%, 26 out of 3,000 accepted papers), 2025
    * indicates equal contribution.
  9. Diffusion Models Through a Global Lens: Are They Culturally Inclusive?
    Zahra Bayramli*, Ayhan Suleymanzade*, Na Min An, Huzama Ahmad, Eunsu Kim, Junyeong Park, James Thorne, and Alice Oh
    In ACL (Oral; Top 8%, 243 out of 3,000 accepted papers), 2025
    * indicates equal contribution.
  10. Sightation Counts: Leveraging Sighted User Feedback in Building a BLV-aligned Dataset of Diagram Descriptions
    Wan Ju Kang, Eunki Kim, Na Min An, Sangryul Kim, Haemin Choi, Ki Hoon Kwak, and James Thorne
    In ACL Main, 2025
  11. When Tom Eats Kimchi: Evaluating Cultural Awareness of Multimodal Large Language Models in Cultural Mixture Contexts
    Jun Seong Kim*, Kyaw Ye Thu*, Javad Ismayilzada, Junyeong Park, Eunsu Kim, Huzama Ahmad, Na Min An, James Thorne, and Alice Oh
    In NAACL C3NLP Workshop (Outstanding Paper), 2025
    * indicates equal contribution.
  12. Machine Learning Techniques for Simulating Human Psychophysical Testing of Low-Resolution Phosphene Face Images in Artificial Vision
    Na Min An, Hyeonhee Roh, Sein Kim, Jae Hun Kim, and Maesoon Im
    Advanced Science (Impact Factor 14.3, JCR Ranking 6.5%), 2025

2024

  1. Stable Language Model Pre-training by Reducing Embedding Variability
    Woojin Chung, Jiwoo Hong, Na Min An, James Thorne, and Se-Young Yun
    In EMNLP Main, 2024
  2. Capturing the Relationship Between Sentence Triplets for LLM and Human-Generated Texts to Enhance Sentence Embeddings
    Na Min An, Sania Waheed, and James Thorne
    In EACL Findings, 2024

2023

  1. Can Large Language Models Capture Dissenting Human Voices?
    Noah Lee*, Na Min An*, and James Thorne
    In EMNLP Main, 2023
    * indicates equal contribution.
  2. Reinforcement Learning Framework to Simulate Short-Term Learning Effects of Human Psychophysical Experiments Assessing the Quality of Artificial Vision
    Na Min An, Hyeonhee Roh, Sein Kim, Jae Hun Kim, and Maesoon Im
    In IJCNN (Oral), 2023
  3. Artificial Vision Parameter Learning and Automating Method for Improving Visual Prosthetic Systems
    Maesoon Im, Hyeonhee Roh, Na Min An, and Jae Hun Kim
    US Patent App. 18/075,555, 2023

2021

  1. Machine Learning Approaches as an Alternative to Human Psychophysical Tests of Prosthetic Vision (abstract)
    Na Min An, Hyeonhee Roh, Soomin Jung, Eun Ju Kim, and Maesoon Im
    In EMBC (Abstract), 2021