seil@kang:~

seil@kang:~$ whoami

Seil Kang · PhD Student · Yonsei University · Seoul, Korea

seil@kang:~$ cat /etc/motd

I am a Ph.D. student in Computer Science at Yonsei University.

# research focus

building better multimodal AI systems by rigorously understanding how large-scale transformers internally represent and process cross-modal information, and applying those insights to improve both model performance and the experience of downstream domain experts.

# currently

language models, vision-language models, and the interpretability of models and agentic systems.

seil@kang:~$ opensource-contributor-of

  1. dadb6dd · Opensmi (Feb. 2026 ~ Current)
    Zero-dependency GPU allocation enforcement across multiple SSH nodes, with an interactive TUI and policy-based violation detection.
  2. 002a8f0 · Opencode (Feb. 2026 ~ Current)
    Contributed a merged pull request to the opencode main branch.
  3. 971e2bf · Openclaw (Feb. 2026 ~ Current)
    Ongoing contributor to Openclaw since v2026.2.22.

seil@kang:~$ ls -la ~/publications/

total 12
  1. Pre-Print

    Physics in 2-Steps: Locking Motion Priors Before Visual Refinement Erases Them

    Woojung Han, Seil Kang, Youngjun Jun, Min-Hung Chen, Fu-En Yang, Seong Jae Hwang

    pdf

  2. Pre-Print

    Real-Time Visual Attribution Streaming in Thinking Model

    Seil Kang, Woojung Han, Junhyeok Kim, Jinyeong Kim, Youngeun Kim, Seong Jae Hwang

    pdf

  3. CVPR 2026 [HIGHLIGHT]

    Interpretable Motion-Attentive Maps: Spatio-Temporally Localizing Concepts in Video Diffusion Transformers

    Youngjun Jeon, Seil Kang, Woojung Han, Seong Jae Hwang

    pdf

  4. CVPR 2026

    ViKey: Enhancing Temporal Understanding in Videos via Visual Prompting

    Yeonkyung Lee, Dayun Ju, Youngmin Kim, Seil Kang, Seong Jae Hwang

    pdf

  5. NeurIPS 2025

    Rare Text Semantics Were Always There in Your Diffusion Transformer

    Seil Kang*, Woojung Han*, Dayun Ju, Seong Jae Hwang

    pdf

  6. NeurIPS 2025 Mechanistic Interpretability Workshop [SPOTLIGHT, <13%]

    Interpreting Attention Heads for Image-to-Text Information Flow in Large Vision-Language Models

    Jinyeong Kim, Seil Kang, Jiwoo Park, Junhyeok Kim, Seong Jae Hwang

    pdf

  7. Technical Report

    Neuron-Level Approach for Multi-Hop Reasoning in Large Vision-Language Models

    Seil Kang, Jinyeong Kim, Seong Jae Hwang

    pdf

  8. CVPR 2025 [HIGHLIGHT, <3%]

    Your Large Vision Language Model Only Needs A Few Attention Heads for Visual Grounding

    Seil Kang, Jinyeong Kim, Junhyeok Kim, Seong Jae Hwang

    pdf

  9. ICLR 2025

    See What You Are Told: Visual Attention Sink in Large Multimodal Models

    Seil Kang*, Jinyeong Kim*, Junhyeok Kim, Seong Jae Hwang

    pdf

  10. CVPRW 2025

    FALCON: Frequency Adjoint Link with CONtinuous Density Mask for Fast Single Image Dehazing

    Donghyun Kim, Seil Kang, Seong Jae Hwang

    pdf

  11. Technical Report

    WoLF: Wide-scope Large Language Model Framework for CXR Understanding

    Seil Kang, Donghyun Kim, Junhyeok Kim, Hyo Kyoung Lee, Seong Jae Hwang

    pdf

  12. PR 2024

    CoBra: Complementary Branch Fusing Class and Semantic Knowledge for Robust Weakly Supervised Semantic Segmentation

    Woojung Han, Seil Kang, Kyobin Choo, Seong Jae Hwang

    pdf