JeongHyeon Kim
GitHub • Google Scholar • LinkedIn • Email

Hello! My name is Jeonghyeon Kim, and I am a Ph.D. student in Data Science at SeoulTech, advised by Prof. Sangheum Hwang of DAINTLAB.
🧗‍♂️ I am dedicated to enhancing the reliability and interpretability of AI systems, particularly through:
- Out-of-distribution detection (OoDD)
- LLM unlearning
- Energy-based models (EBMs)
⭐️Research Focus
My research explores OoDD in vision-language models (VLMs) like CLIP, leveraging multi-modal fine-tuning (MMFT). In our work, we have addressed the modality gap between image and text embeddings through cross-modal alignment (CMA), enhancing the utilization of pretrained knowledge. Moving forward, we aim to extend this research to multi-modal pre-training and integrate large language models.
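As a rough illustration of the zero-shot OoDD setting this line of work builds on, the sketch below scores an image by the maximum softmax probability over its cosine similarities to class text embeddings (in the spirit of generic CLIP-based scoring such as MCM; this is not the CMA method itself, and the embeddings are toy values):

```python
import numpy as np

def ood_score(image_emb, text_embs, temperature=0.01):
    """Maximum softmax probability over image-text cosine similarities.

    A high score suggests the image matches some in-distribution class
    prompt; a low score flags a possible OOD input. Generic sketch of
    CLIP-style zero-shot OoDD scoring -- not the CMA method.
    """
    # L2-normalize so dot products become cosine similarities
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                      # cosine similarity per class prompt
    logits = sims / temperature           # temperature-scaled logits
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax over class prompts
    return probs.max()                    # max softmax probability

# Toy example: 3 class prompts embedded in a 4-d space
text_embs = np.eye(3, 4)
in_dist = np.array([0.95, 0.05, 0.0, 0.0])   # close to class-0 prompt
ood     = np.array([0.0, 0.0, 0.0, 1.0])     # aligned with no prompt
print(ood_score(in_dist, text_embs) > ood_score(ood, text_embs))  # True
```

The modality gap mentioned above shows up here as systematically low image-text similarities even for in-distribution inputs, which is what cross-modal alignment aims to reduce.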
I am also interested in LLM unlearning, with a particular focus on preserving utility while effectively unlearning target knowledge. I have been developing dataset-sampling strategies that optimize the balance between forgetting targeted information and maintaining model utility.
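The forget/retain trade-off mentioned above can be sketched with a generic "gradient difference" update on a toy linear model: ascend the loss on a forget set while descending it on a retain set. This is an illustrative baseline under assumed toy data, not the sampling method described here:

```python
import numpy as np

# Generic gradient-difference unlearning sketch on a toy linear model.
# Illustrative only -- not the author's dataset-sampling method.
rng = np.random.default_rng(0)

def mse_loss(w, X, y):
    # mean squared error of a linear model
    return 0.5 * np.mean((X @ w - y) ** 2)

def mse_grad(w, X, y):
    # gradient of the MSE above, averaged over samples
    return X.T @ (X @ w - y) / len(y)

X_forget, y_forget = rng.normal(size=(8, 3)), rng.normal(size=8)
X_retain, y_retain = rng.normal(size=(32, 3)), rng.normal(size=32)

w0 = rng.normal(size=3)              # "pretrained" weights
w, lam, lr = w0.copy(), 1.0, 0.01
for _ in range(200):
    # ascend the forget loss, descend the retain loss;
    # lam trades off forgetting against preserved utility
    g = -mse_grad(w, X_forget, y_forget) + lam * mse_grad(w, X_retain, y_retain)
    w -= lr * g
```

Dataset sampling enters by choosing which forget/retain examples feed each update, which is where the balance between the two objectives is actually controlled.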
💥Ultimate Goal
To develop AI systems that are reliable and interpretable, ensuring their trustworthy deployment in real-world applications.
News
- Feb 26, 2025: “Enhanced OoD Detection through Cross-Modal Alignment of Multi-Modal Representations” has been accepted at CVPR 2025! 🔥
Publications
- [Unlearning] Uncovering Hidden Vulnerabilities in Machine Unlearning: Adversarial Attack as a Probe and Pruning as a Solution. In Korea Computer Congress 2024, 2024
- [VLM OoDD] Comparison of Out-of-Distribution Detection Performance of CLIP-based Fine-Tuning Methods. In 2024 International Conference on Electronics, Information, and Communication (ICEIC), 2024
- [VLM OoDD] Enhancing Out-of-Distribution Detection Performance of CLIP Based on Fine-tuning with Random Texts. In Korea Computer Congress 2023, 2023
- Active Learning