Woo Suk (Paul) Choi

Ph.D. candidate at Seoul National University

About Me

A results-oriented AI researcher currently pursuing an integrated M.S./Ph.D. at Seoul National University, building on a Bachelor's in Computer Engineering from the State University of New York (SUNY) at Buffalo. My background includes two years of hands-on research at the Korea Electronics Technology Institute (KETI), where I developed deep learning and reinforcement learning models for automated IoT systems and co-engineered an environmental sensor device for occupancy detection. At SNU, I have successfully managed high-impact, industry-sponsored projects. As the Project Manager for a collaboration with Samsung Electronics (the “Samsung Beyond Limit” project), I led the development of an AI-based analysis system for semiconductor manufacturing processes. Subsequently, as Deputy Project Manager, I developed a “Learning by Asking” (LBA) agent for active knowledge acquisition.

Education

Seoul National University

Ph.D. Candidate, Interdisciplinary Program in Neuroscience

2018 - Present

Bridging perception and cognition — toward intelligent systems that learn and reason like humans.

▷ Conducting research on structured knowledge representation learning for human-level video understanding.

▷ Developing models for vision–language interaction using scene graphs, knowledge graphs, and multimodal large language models.

▷ Managing and contributing to national AI projects including “Learning by Asking (LBA)” and “SW StarLab” funded by IITP.

▷ Organizer of the IEEE ROMAN 2023 workshop on Learning by Asking for Intelligent Robots and Agents (LA4IRA, https://la4ira.github.io).

University at Buffalo (State University of New York, Buffalo)

Bachelor's degree, Computer Engineering

2009 - 2016

Where my passion for artificial intelligence and computer systems first began to take shape.

▷ Graduated with Dean’s List Honors (Spring 2014, Fall 2014).

▷ Built a strong foundation in computer systems, algorithms, and machine learning principles.

▷ Gained early experience in collaborative software projects and research-driven problem solving.

Work Experience

Korea Electronics Technology Institute (KETI)

www.keti.re.kr

Research Assistant

October 2016 - July 2018

Conducted AI research in the Energy IT Convergence Research Center.

At KETI, I conducted research on Artificial Intelligence, focusing on deep learning and reinforcement learning for automated IoT systems. I developed predictive and control models that enhanced energy efficiency and automation performance. Additionally, I co-engineered an IoT environmental sensing device integrating CO2, PIR, and temperature sensors for real-time occupancy detection.

Third Republic of Korea Army (TROKA)

Administrative Clerk & Interpreter

February 2012 - November 2013

Served in the personnel administration department and supported joint exercises with USFK.

As an interpreter, I collaborated with the United States Forces Korea (USFK) during large-scale joint military exercises, including Ulchi Freedom Guardian (UFG) and Key Resolve/Foal Eagle (KR/FE). As an administrative clerk, I managed personnel documents and reports using Microsoft Excel and Word, developing strong organizational and communication skills.

Academic Experience

AI Expert Program in Samsung Electronics

May 2025 - June 2025

Teaching assistant for the Samsung AI Expert Program at Seoul National University.

Assisted professors with course instruction and led practical sessions for students.

Journal/Conference Reviewer

Jan 2020 - Present

Served as a reviewer for journals and conferences in the field of AI.

Reviewed papers for venues such as CVPR, AAAI, and EMNLP.

Neuroscience Seminar 1 and 2

September 2023 - December 2024

Teaching assistant for the Neuroscience Seminar 1 and 2 courses at Seoul National University.

Assisted professors with course instruction and graded assignments.

Projects

Learning by Asking (LBA)

https://la4ira.github.io

IITP-funded national AI project (2022.01 – Present)

Developed an agent capable of acquiring knowledge through interactive questioning. Designed a question generation framework based on multimodal representations and hierarchical knowledge graphs. Organized the 1st Workshop on Learning by Asking for Intelligent Robots and Agents (IEEE RO-MAN 2023).

SW StarLab Project

IITP-funded AI research project (2021.01 – 2022.12)

Conducted research on cognitive reasoning for video understanding and question answering. Developed structured learning pipelines integrating vision-language representations and knowledge graphs. Focused on structured representations such as scene graphs and knowledge graphs for situational reasoning.

IITP-funded AI research project (2018.09 – 2021.12)

Participated in the development of human-level video understanding intelligence. Designed evaluation protocols and datasets for assessing multimodal comprehension. Also served as an organizer for the Video Turing Test competition.

Samsung Beyond Limit (BL)

Industry-academia collaboration project with Samsung Electronics (2019.01 – 2021.12)

Led the development of an AI-based analysis system for semiconductor manufacturing processes. Designed a reinforcement learning (RL)-based model for process optimization and predictive analytics. Managed data pipeline construction and multi-modal uncertainty modeling.

Digital Alarm Clock

Course project for CSE 341 (Computer Organization) at the University at Buffalo.

Designed and implemented a digital alarm clock using logic gates and microcontroller-based timing circuits.

Commencement System Firmware

Capstone project for CSE 453 (HW/SW Integrated System Design) at the University at Buffalo.

Developed firmware for the UB Commencement event scanning system integrating hardware-software communication modules.

Publications

INQUIRER: Harnessing Internal Knowledge Graphs for Video Question Generation

https://www.sciencedirect.com/science/article/pii/S0950705125010780

Published in Knowledge-Based Systems (2025).
Proposed INQUIRER, a framework that leverages internal knowledge graphs to generate context-aware questions from long-form videos.
The method enhances reasoning and diversity in video question generation by explicitly modeling structural semantics.

Video Turing Test: A First Step Towards Human-Level AI

https://onlinelibrary.wiley.com/doi/full/10.1002/aaai.12128

Published in AI Magazine (2023).
Introduces the Video Turing Test (VTT) initiative for benchmarking human-level video understanding.
Explores metrics and methodologies to evaluate an AI system’s comprehension of narrative-driven videos.

Scene Graph Parsing via Abstract Meaning Representation in Pre-trained Language Models

https://aclanthology.org/2022.dlg4nlp-1.4/

Presented at NAACL Workshop on Deep Learning on Graphs for NLP (2022).
Proposed an AMR-based scene graph parser that integrates semantic abstraction from pre-trained language models, improving visual-linguistic alignment.

SGRAM: Improving Scene Graph Parsing via Abstract Meaning Representation

https://arxiv.org/abs/2210.08675

ArXiv preprint (2022).
Extended AMR-guided parsing for richer scene graph construction, bridging conceptual semantics with visual grounding.

Language-agnostic Semantic Consistent Text-to-Image Generation

https://aclanthology.org/2022.mml-1.1/

Presented at ACL Multilingual Multimodal Workshop (2022).
Proposed a multilingual generation model that ensures semantic consistency between text and image representations across diverse languages.

Hypergraph Transformer: Weakly-Supervised Multi-hop Reasoning for Knowledge-based Visual Question Answering

https://aclanthology.org/2022.acl-long.29/

Published in ACL 2022.
Introduced the Hypergraph Transformer, enabling multi-hop reasoning over knowledge bases for visual question answering tasks.

Toward a Human-Level Video Understanding Intelligence

https://arxiv.org/abs/2110.04203

Presented at AAAI Fall Symposium Series on AI-HRI (2021).
Proposed a cognitive framework and evaluation methodology for achieving human-level understanding in complex video narratives.

Toward General Scene Graph: Integration of Visual Semantic Knowledge with Entity Synset Alignment

https://aclanthology.org/2020.alvr-1.2

Presented at ACL Workshop on Advances in Language and Vision Research (2020).
Integrated visual semantic knowledge and WordNet synsets to improve generalization in scene graph construction.

Web of Things based IoT Standard Interworking Test Case: Demo Abstract

https://dl.acm.org/doi/10.1145/3276774.3281012

Presented at BuildSys 2018.
Introduced standard test cases for IoT interworking using the Web of Things (WoT) framework.

Occupancy Detection Technology in the Building based on IoT Environment Sensors

https://dl.acm.org/doi/pdf/10.1145/3277593.3277633

Presented at IOT 2018.
Developed a low-cost occupancy detection system using IoT environmental sensors (CO₂, PIR, temperature) for smart energy management.

Space Inference System for Buildings using IoT

https://dl.acm.org/doi/10.1145/3162957.3163023

Presented at ICCIP 2017.
Designed a space inference system leveraging IoT-based sensor networks to monitor occupancy and optimize energy usage.

Intelligent Building using Hybrid Inference with Building Automation System to Improve Energy Efficiency

https://ceur-ws.org/Vol-1930/paper-2.pdf

Presented at SWIT@ISWC 2017.
Proposed a hybrid inference framework combining machine learning and semantic reasoning to improve building energy efficiency.

A Little More About Me

Outside of research, I enjoy spending time on things that keep me energized and inspired:

  • Gaming (MMORPGs such as Lost Ark, Diablo II and IV, League of Legends, etc.)
  • Working out and staying active through weight training, basketball, and swimming
  • Watching sports like soccer (football), basketball, and baseball
  • Listening to music and discovering new artists