Publications

* indicates corresponding author(s)

Submitted & Preprints

[4] Jeongmo Kim, Yisak Park, Minung Kim, Seungyul Han*, “Enhancing Generalization in Meta-Reinforcement Learning via Task-Preserving Representations.” (submitted)

[3] Seongmin Kim, Seungyul Han*, Woojun Kim, Jeewon Jeon, Youngchul Sung, “Off-Policy Multi-Agent Policy Optimization with Counterfactual Advantage Estimation.” (submitted)

[2] Giseung Park, Whiyoung Jung, Seungyul Han, Sungho Choi and Youngchul Sung*, “Adaptive multi-model fusion learning for sparse-reward reinforcement learning.” (submitted)

[1] Sohee Bae, Seungyul Han, and Youngchul Sung*, “A reinforcement learning formulation of the Lyapunov optimization: Application to edge computing systems with queue stability.” (submitted)

Top Machine Learning Conferences

[C.9] Junghyuk Yeom, Yonghyeon Jo, Jeongmo Kim, Sanghyeon Lee, Seungyul Han*, “Exclusively Penalized Q-learning for Offline Reinforcement Learning,” the 38th Conference on Neural Information Processing Systems (NeurIPS) 2024, Vancouver, Canada, Dec. 2024 (spotlight paper).

[C.8] Yonghyeon Jo, Sunwoo Lee, Junghyuk Yum, Seungyul Han*, “FoX: Formation-aware exploration in multi-agent reinforcement learning,” Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vancouver, Canada, Feb. 2024.

[C.7] Sungho Choi, Seungyul Han*, Woojun Kim, Jongseong Chae, Whiyoung Jung, Youngchul Sung, “Domain Adaptive Imitation Learning with Visual Observation,” the 37th Conference on Neural Information Processing Systems (NeurIPS) 2023, New Orleans, LA, USA, Dec. 2023.

[C.6] Seongmin Kim, Woojun Kim, Jeewon Jeon, Youngchul Sung and Seungyul Han, “Off-policy multi-agent policy optimization with multi-step counterfactual advantage estimation,” Adaptive and Learning Agents (ALA) Workshop at AAMAS 2023, London, UK, May 2023.

[C.5] Jongseong Chae, Seungyul Han*, Whiyoung Jung, Myungsik Cho, Sungho Choi, and Youngchul Sung, “Robust imitation learning against variations in environment dynamics,” the 39th International Conference on Machine Learning (ICML), Jul. 2022.

[C.4] Seungyul Han and Youngchul Sung*, “A max-min entropy framework for reinforcement learning,” the 35th Conference on Neural Information Processing Systems (NeurIPS) 2021, Dec. 2021.

[C.3] Seungyul Han and Youngchul Sung*, “Diversity actor-critic: Sample-aware entropy regularization for sample-efficient exploration,” the 38th International Conference on Machine Learning (ICML) 2021, Jul. 2021.

[C.2] Seungyul Han and Youngchul Sung*, “AMBER: Adaptive multi-batch experience replay for continuous action control,” IJCAI Workshop on Scaling Up Reinforcement Learning, Macao, China, Aug. 2019.

[C.1] Seungyul Han and Youngchul Sung*, “Dimension-wise importance sampling weight clipping for sample-efficient reinforcement learning,” the 36th International Conference on Machine Learning (ICML) 2019, Long Beach, CA, USA, Jun. 2019.

Journals

[J.1] Seungyul Han, Youngchul Sung*, and Yong H. Lee, “Filter design for generalized frequency-division multiplexing,” IEEE Transactions on Signal Processing, vol. 65, no. 7, pp. 1644–1659, Apr. 2017.

Patents

[P.6] Seungyul Han and Yonghyeon Jo, “Multi-Agent-Based Reinforcement Learning System and Operating Method of the Reinforcement Learning System,” application number: 10-2024-0059599.

[P.5] Youngchul Sung, Sungho Choi, Woojun Kim, Jongseong Chae, Whiyoung Jung, Seungyul Han, “Method and System for Domain-Adaptive Imitation Learning Using Visual Observation Data,” application number: 10-2023-0021570.

[P.4] Youngchul Sung, Jongseong Chae, Whiyoung Jung, Myungsik Cho, Sungho Choi, Seungyul Han, “Method and System for Imitation Learning Robust to Changes in Environment Dynamics,” application number: 10-2023-0021569.

[P.3] Seungyul Han and Youngchul Sung, “Sample-Aware Entropy Regularization Technique for Sample-Efficient Exploration,” registration number: 10-2558092.

[P.2] Seungyul Han and Youngchul Sung, “Adaptive Multi-Batch Experience Replay Technique for Continuous Action Space Control,” registration number: 10-2103644.

[P.1] Youngchul Sung and Seungyul Han, “Method for Designing Filter Waveforms Based on a GFDM System,” registration number: 10-1837609.