-
- [Nov 11, 2025] Professor Lee Seon-jae's AutoPhone Team Wins the 2025 Artificial Intelligence (AI) Championship
- The Ministry of Science and ICT (MSIT; Deputy Prime Minister and Minister of Science and ICT Bae Kyung-hoon) held the final judging for the 2025 Artificial Intelligence (AI) Champion Competition at the Dragon City Hotel in Yongsan, Seoul on November 5. Five research teams were selected as winners, and an awards ceremony was held. Team "AutoPhone," which includes Professor Lee Seon-jae, won the 2025 AI Champion Competition and was selected for a national R&D project worth up to 3 billion won. Technology introduction: The winning entry, FluidGPT, is a mobile AI agent technology based on MobileGPT, which Professor Lee Seon-jae researched during his doctoral studies. It is an autonomous AI agent that recognizes the user's voice command, launches the relevant app, and completes the steps of clicking, entering text, and making payments. For example, if a user says "Book an SRT ticket from Seoul Station to Busan" or "Call a taxi," the AI actually opens the app and performs the necessary operations step by step. This goes beyond a simple voice assistant and embodies the concept of agentic AI: a fully autonomous system in which the AI directly recognizes the screen, makes decisions, and acts on its own. The core of FluidGPT is its non-invasive (API-free) architecture. Whereas existing AI services required access to an app's API (Application Programming Interface) to execute functions, this technology directly recognizes and manipulates the screen (UI) without modifying or linking the app code, acting as if a person were using the smartphone. This approach gives the AI a kind of "hand-moving intelligence," breaking the existing paradigm of smartphone use, and is regarded as a technology that will change how smartphones are used. Reference: https://www.aitimes.kr/news/articleView.html?idxno=37080
-
- Posted: 2025-11-11
- Views: 59
-
- [Nov 7, 2025] Department of CSE wins Grand Prize in 2 out of 3 tracks at the 2025 Samsung AI Challenge
- At the 2025 Samsung AI Challenge, organized by the Samsung DS AI Center, two students from the Department of Computer Science and Engineering at Sungkyunkwan University won the Grand Prize (1st place) in two of the three tracks (AI Co-Scientist / Large Model Compression). Jehyeon Park (Department of Computer Science and Engineering, Class of 2020) participated individually in the Large Model Compression track, which focuses on reducing model size without performance loss, and won the Grand Prize. Jihwan Byeon (Department of Computer Science and Engineering, Class of 2021) won the Grand Prize in the AI Co-Scientist track with the support and guidance of Professor Seonjae Lee’s laboratory. In the Large Model Compression track, a methodology was developed that addresses the memory and resource issues of SMoE models, efficiently reducing the number of experts while maintaining performance. In the AI Co-Scientist track, an algorithm was developed that designs and coordinates multiple AI agents to automate the entire model development process for solving 3D metrology problems, including code generation and experimentation, with minimal human intervention. Awardee interview (Jehyeon Park): https://dacon.io/forum/415288?page=1&dtype=tag&fType=&category=forum Awardee interview (Jihwan Byeon): https://dacon.io/forum/415286?page=1&dtype=tag&fType=&category=forum
-
- Posted: 2025-11-07
- Views: 59
-
- [Nov 7, 2025] Professor Hyoungshick Kim’s seclab wins Best Poster Award at ACM CCS 2025
- The paper "Poster: Scalable Privacy-Preserving Linear Regression Training via Homomorphic Encryption," co-authored by undergraduate student Yena Cho and Professor Hyoungshick Kim of the Security Engineering Laboratory (advisor: Hyoungshick Kim, https://seclab.skku.edu), received the Best Poster Award at the ACM Conference on Computer and Communications Security (CCS 2025), one of the most prestigious conferences in the field of security (2 of 41 posters selected, approximately 4.9%). The research proposes a novel protocol that enables efficient linear regression training on encrypted data. The team developed a CKKS-based PP-LR (Privacy-Preserving Linear Regression) protocol to address the high computational cost of traditional homomorphic encryption-based training methods. PP-LR efficiently performs gradient descent on encrypted data through feature-level parallelization and a conditional bootstrapping technique. As a result, it achieves up to 15.7× faster training than existing homomorphic encryption implementations while keeping accuracy within 0.2% of plaintext-trained models.
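For intuition, the gradient-descent step that PP-LR evaluates under encryption corresponds to the following plaintext reference computation. This is only an assumed sketch, not the paper's protocol: in the actual CKKS setting each feature column would be packed into a ciphertext, so the per-feature inner product below becomes one parallel homomorphic operation.

```python
import numpy as np

def gd_step(X, y, w, lr=0.1):
    """One gradient-descent step for linear regression: w <- w - lr * grad.

    X: (n, d) feature matrix, y: (n,) targets, w: (d,) weights.
    Feature-level parallelization = each column of X is one ciphertext,
    so each entry of `grad` is computed independently and in parallel.
    """
    n = len(y)
    residual = X @ w - y            # encrypted domain: slot-wise multiply-and-add
    grad = (X.T @ residual) / n     # one inner product per feature
    return w - lr * grad
```

Running this step repeatedly converges to the least-squares solution, which is exactly the computation the homomorphic protocol must reproduce on ciphertexts.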
-
- Posted: 2025-11-07
- Views: 76
-
- [Nov 5, 2025] School of Global Convergence participates in the 2025 Gwangyang International Media Art Festival Campus Program
- The Culture and Technology (CT) Convergence major in the School of Global Convergence at our university (Program Chair: Professor Changjun Lee) participated in the 2025 Gwangyang International Media Art Festival (GMAF), held from October 22 to November 4, 2025, presenting seven student-created works. The exhibition, part of the Gwangyang International Media Art Campus Program, offered a focused showcase of CT students’ creative experiments and international collaboration, centered on the fusion of technology and art. The Culture and Technology Convergence major uses advanced technologies such as artificial intelligence (AI), interactive media, and XR as tools for cultural and artistic creation, cultivating convergence talent who combine creativity with technical competence in a rapidly changing digital environment. The festival brought together works produced in a joint workshop with Kunstuniversität Linz at Ars Electronica 2025, invited pieces from Kunstuni Linz, works by CT students who participated in Ars Electronica 2024, and works by students from the Department of Immersive Media Engineering. The exhibition demonstrated the potential of a next-generation media art education model in which technology and art are seamlessly integrated, and showcased a variety of collaborative works by students from our university and Kunstuniversität Linz. The exhibited works were: Expansion in Disorder: <침묵합창단> — 성상훈, 하지수; Not A Monster: <센소리움> — 정효빈, 변서윤; 오케스트라이즈 — 이유진, 강윤경, 김예서; Lake Watchers 2024 — Hess Jeon and four others (Kunstuni Linz); The Path of Experience — Jeenie Kim (Kunstuni Linz); Cycle and Connection — 박서영; Shake It, Light It — 김유정, 이수연. The project was led by Professor Seol Sanghoon (RISE Program Industry–Academic Professor), who served as Campus General Director, with overall direction by Professor Changjun Lee (Head of the Culture and Technology Convergence major).
- Posted: 2025-11-05
- Views: 55
-
- [Nov 3, 2025] Professor Youngjoong Ko's NLP Laboratory, two papers accepted for EMNLP 2025 Main Track (Long Paper)
- Two papers from the Natural Language Processing Laboratory (NLP Lab, supervised by Professor Youngjoong Ko) have been accepted for publication in the Main Track (Long Paper) of EMNLP 2025 (The 2025 Conference on Empirical Methods in Natural Language Processing), a top-tier international conference in the field of artificial intelligence and natural language processing. Title: ECO Decoding: Entropy-Based Control for Controllability and Fluency in Controllable Dialogue Generation, Main Track (long paper) (Seungmin Shin, Master’s student in the Dept. of AI, and Dooyoung Kim, Ph.D. student in the Dept. of AI) Abstract: Controllable Dialogue Generation (CDG) enables chatbots to generate responses with desired attributes, and weighted decoding methods have achieved significant success in the CDG task. However, using a fixed constant value to manage the bias of attribute probabilities makes it challenging to find an ideal control strength that satisfies both controllability and fluency. To address this issue, we propose ECO decoding (Entropy-based COntrol), which dynamically adjusts the control strength at each generation step according to the model’s entropy in both the language model and attribute classifier probability distributions. Experiments on the DailyDialog and MultiWOZ datasets demonstrate that ECO decoding consistently improves controllability while maintaining fluency and grammaticality, outperforming prior decoding methods across various models and settings. Furthermore, ECO decoding alleviates probability interpolation issues in multi-attribute generation and consequently demonstrates strong performance in both single- and multi-attribute scenarios. Title: Decoding Dense Embeddings: Sparse Autoencoders for Interpreting and Discretizing Dense Retrieval, Main Track (long paper) (Sungwan Park, Master’s student in the Dept. of AI, and Taeklim Kim, Master’s student in the Dept. of AI) Abstract: Despite their strong performance, Dense Passage Retrieval (DPR) models suffer from a lack of interpretability. In this work, we propose a novel interpretability framework that leverages Sparse Autoencoders (SAEs) to decompose previously uninterpretable dense embeddings from DPR models into distinct, interpretable latent concepts. We generate natural language descriptions for each latent concept, enabling human interpretations of both the dense embeddings and the query-document similarity scores of DPR models. We further introduce Concept-Level Sparse Retrieval (CL-SR), a retrieval framework that directly utilizes the extracted latent concepts as indexing units. CL-SR effectively combines the semantic expressiveness of dense embeddings with the transparency and efficiency of sparse representations. We show that CL-SR achieves high computational and storage efficiency while maintaining robust performance across vocabulary and semantic mismatches. | Professor Youngjoong Ko | yjko@skku.edu, nlp.skku.edu | NLP Lab | nlplab.skku.edu
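The core idea of ECO decoding, an attribute-control strength that varies per step with the entropies of the two distributions, can be illustrated with a small sketch. The exact scaling function from the paper is not reproduced here; the formula below is an assumed stand-in showing only the structure (weighted decoding with an entropy-dependent exponent).

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + eps))

def eco_step(p_lm, p_attr, base_lambda=2.0):
    """Combine LM and attribute-classifier distributions for one decoding step.

    The control strength `lam` is not a fixed constant: when the LM is unsure
    (high entropy) the attribute signal is weighted more heavily, and vice
    versa. This particular scaling is an illustrative assumption.
    """
    lam = base_lambda * entropy(p_lm) / (entropy(p_lm) + entropy(p_attr) + 1e-12)
    scores = np.asarray(p_lm) * np.asarray(p_attr) ** lam
    return scores / scores.sum()   # renormalize to a distribution
```

With a fixed constant in place of `lam`, this reduces to ordinary weighted decoding, which is exactly the baseline the paper improves on.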
-
- Posted: 2025-11-03
- Views: 87
-
- [Oct 31, 2025] Eunil Park’s research team from the AI Convergence Major wins at ICCV 2025 - ABAW VA Estimation Challenge
- ▲ (From left) Yu-bin Lee, Ph.D. candidate at Sungkyunkwan University; Sang-eun Lee, graduate (currently researcher at ETRI); Chae-won Park, M.S. candidate; Jun-yeop Cha, Ph.D. candidate; and Professor Eunil Park The research team led by Professor Eunil Park from the AI Convergence Major announced that they won first place in the ABAW (Affective Behavior Analysis in the Wild) / Valence-Arousal Estimation Challenge, held as part of ICCV 2025 (International Conference on Computer Vision), one of the world’s most prestigious conferences in artificial intelligence (computer vision). The competition took place in October 2025 in Hawaii, USA, and brought together leading universities and research institutes from around the world to compete in emotion state prediction technologies using unstructured multimodal data such as video and audio. ▲ Certificate awarded for achieving first place in the ICCV 2025 - ABAW Valence-Arousal Estimation Challenge ▲ Yu-bin Lee, Ph.D. candidate, presenting the first-place winning research at ICCV 2025 (October 20, 2025, Honolulu Convention Center, Hawaii, USA) The ABAW Challenge evaluates technologies that precisely estimate human emotions along the Valence-Arousal (positive–negative, activation–deactivation) dimensions using complex multimodal data collected from real-world environments. In particular, this year’s competition required a sophisticated understanding of temporal dynamics and multimodal fusion, establishing itself as a key benchmark in the fields of real-time emotion estimation and human–AI interaction. Professor Eunil Park’s research team achieved outstanding results by proposing an emotion recognition framework based on Time-aware Gated Fusion (TAGF). 
The proposed model employs a BiLSTM gating mechanism to dynamically reflect emotional changes over time, suppress unnecessary noise, and emphasize key emotional cues, thereby achieving higher prediction accuracy than existing models. These results demonstrate that stable and interpretable emotion recognition is possible even in real-world environments and are expected to extend to application areas such as human–AI interaction, emotion-based content analysis, and the development of emotionally intelligent agents. ▲ Schematic diagram of the Time-aware Gated Fusion (TAGF)-based emotion prediction framework integrating visual and audio information This achievement is regarded as another global recognition of Professor Eunil Park’s research team and their long-standing expertise in developing user-understanding-based general-purpose artificial intelligence technologies. The team plans to focus on advancing next-generation emotionally intelligent AI technologies that go beyond emotion recognition to precisely interpret human cognitive contexts and intentions. This research was conducted as part of the Human-Centered Next-Generation Challenge-Oriented AI Technology Development Project and the Deepfake Research Center Project, supported by the Ministry of Science and ICT and the Institute of Information & Communications Technology Planning and Evaluation (IITP). The results were officially presented at ICCV 2025. ※ Title: Dynamic Temporal Gating Networks for Cross-Modal Valence-Arousal Estimation ※ Authors: Yu-bin Lee (first author), Sang-eun Lee, Chae-won Park, Jun-yeop Cha (co-authors), Eunil Park (corresponding author) ※ Conference: ICCV 2025 (International Conference on Computer Vision)
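The gated-fusion structure described above can be sketched compactly. The paper's TAGF derives its gates from a BiLSTM over the sequence; the sketch below substitutes a simple per-timestep linear gate (all shapes and names assumed) to show only the convex-combination structure that lets the model emphasize one modality over the other at each timestep.

```python
import numpy as np

def gated_fusion(visual, audio, Wg, bg):
    """Fuse per-timestep visual/audio features with a learned sigmoid gate.

    visual, audio: (T, D) feature sequences; Wg: (2D, D) gate weights; bg: (D,).
    Returns a (T, D) fused sequence. In TAGF the gate would come from a BiLSTM
    rather than this single linear layer.
    """
    x = np.concatenate([visual, audio], axis=-1)        # (T, 2D) joint features
    gate = 1.0 / (1.0 + np.exp(-(x @ Wg + bg)))         # (T, D), each entry in (0, 1)
    return gate * visual + (1.0 - gate) * audio         # per-timestep convex mix
```

Because the gate is recomputed at every timestep from the current features, noisy frames in one modality can be down-weighted without discarding that modality entirely.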
-
- Posted: 2025-10-31
- Views: 96
-
- [Oct 27, 2025] Professor Sooyoung Cha's Software Analysis Laboratory (SAL): a paper accepted at ICSE 2026
- A paper authored by Minjong Kim, a Ph.D. student in the Software Analysis Laboratory (advisor: Sooyoung Cha), has been accepted for publication at ICSE 2026 (IEEE/ACM International Conference on Software Engineering), one of the most prestigious conferences in the field of software engineering. The paper will be presented in April 2026 in Rio de Janeiro, Brazil. The paper, "Enhancing Symbolic Execution with Self-Configuring Parameters," proposes a fully automated external-parameter tuning technique to improve the performance of symbolic execution, a powerful software testing methodology. Practical symbolic execution tools widely used in academia and industry typically expose dozens to hundreds of external parameters that significantly affect performance. However, existing parameter tuning approaches for symbolic execution have required manual adjustment, or semi-automatic methods involving user intervention, for each target program. In this work, the authors propose ParaSuit, a method that automatically selects appropriate external parameter values without any user intervention, for two well-known symbolic execution tools, KLEE and CREST. Experimental results show that ParaSuit significantly improves branch coverage and bug detection compared to state-of-the-art parameter tuning techniques on a wide range of open-source C programs. [Paper Information] - Title: Enhancing Symbolic Execution with Self-Configuring Parameters - Authors: Minjong Kim, Sooyoung Cha - Conference: IEEE/ACM International Conference on Software Engineering (ICSE 2026) Abstract: We present ParaSuit, a self-configuring technique that enhances symbolic execution by autonomously adjusting its parameters tailored to each program under test. Modern symbolic execution tools are typically equipped with various external parameters to effectively test real-world programs. 
However, the need for users to fine-tune a multitude of parameters for optimal testing outcomes makes these tools harder to use and limits their potential benefits. Despite recent efforts to improve this tuning process, existing techniques are not self-configuring; they cannot dynamically identify which parameters to tune for each target program, and for each manually selected parameter, they sample a value from a fixed, user-defined set of candidate values that is specific to that parameter and remains unchanged across programs. The goal of this paper is to automatically configure symbolic execution parameters from scratch for each program. To this end, ParaSuit begins by automatically identifying all available parameters in the symbolic execution tool and evaluating each parameter’s impact through interactions with the tool. It then applies a specialized algorithm to iteratively select promising parameters, construct sampling spaces for each, and update their sampling probabilities based on data accumulated from symbolic execution runs using sampled parameter values. We implemented ParaSuit on KLEE and assessed it across 12 open-source C programs. The results demonstrate that ParaSuit significantly outperforms the state-of-the-art method without self-configuring parameters, achieving an average of 26% higher branch coverage. Remarkably, ParaSuit identified 11 unique bugs, four of which were exclusively discovered by ParaSuit. | Professor Sooyoung Cha | sooyoung.cha@skku.edu | Software Analysis Laboratory | sal.skku.edu/
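The sample-and-reweight loop sketched in the abstract can be caricatured as follows. This toy version is not ParaSuit itself: the real tool discovers KLEE's parameters and constructs sampling spaces automatically, whereas here both are given up front and `run_symbolic_execution` is a stand-in callback that returns a coverage score.

```python
import random

def tune(params, run_symbolic_execution, rounds=50, seed=0):
    """Sample parameter values, run the tool, and reweight good candidates.

    params: {name: [candidate values]}. Each candidate starts with equal
    sampling weight; weights grow in proportion to the coverage achieved by
    configurations that used that candidate.
    """
    rng = random.Random(seed)
    weights = {p: {v: 1.0 for v in vs} for p, vs in params.items()}
    best, best_cov = None, -1.0
    for _ in range(rounds):
        config = {p: rng.choices(list(w), weights=w.values())[0]
                  for p, w in weights.items()}
        cov = run_symbolic_execution(config)
        if cov > best_cov:
            best, best_cov = config, cov
        for p, v in config.items():          # boost values that yielded coverage
            weights[p][v] += cov
    return best, best_cov
```

The reweighting step is the essential feedback loop: configurations that cover more branches make their parameter values more likely to be sampled again.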
-
- Posted: 2025-10-27
- Views: 175
-
- [Oct 14, 2025] Professor Hyungjoon Koo’s SecAI Lab Paper Accepted to ACSAC ’25
- A paper by Minseok Kim (M.S. Program, Department of Software) from the SecAI Lab (Advisor: Prof. Hyungjoon Koo, https://secai.skku.edu) titled “Rescuing the Unpoisoned: Efficient Defense against Knowledge Corruption Attacks on RAG Systems” has been accepted for presentation at the Annual Computer Security Applications Conference (ACSAC ’25), one of the premier international conferences in the field of cybersecurity. The paper will be presented in December 2025. Retrieval-Augmented Generation (RAG) is an emerging technique designed to overcome the limitations of large language models (LLMs)—notably hallucination and lack of access to up-to-date information—by leveraging external knowledge bases. However, recent studies have shown that malicious actors can inject poisoned or misleading information into open knowledge sources such as Wikipedia, thereby corrupting RAG-based systems to produce false or manipulated responses. While existing defense approaches are effective, they often incur high computational overhead, requiring per-document verification or additional model training. To address these challenges, this research introduces RAGDefender, a defense mechanism that detects and filters malicious content without any additional model inference or retraining. The proposed system employs a two-stage filtering framework based on semantic similarity and TF-IDF lexical patterns. In the first stage, it estimates the number of potentially malicious documents using hierarchical clustering (for single-hop QA) or focus-level analysis (for multi-hop QA). In the second stage, it performs pairwise ranking using cosine similarity and frequency scoring to precisely identify poisoned content. 
Experimental results on NQ, HotpotQA, and MS MARCO datasets, across three attack types and six language models (LLaMA, Vicuna, GPT-4o, Gemini, etc.), show that RAGDefender reduces the attack success rate (ASR) from 0.89 to 0.02 even when malicious documents outnumber legitimate ones by a factor of four, while improving answer accuracy from 0.21 to 0.73. Notably, RAGDefender achieves 12.36× faster processing speed than prior methods while requiring no GPU memory, proving it to be a highly practical and easily integrable defense solution for a wide range of RAG frameworks and retrievers.
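A toy version of the similarity-based filtering stage might look like the following. It is illustrative only: RAGDefender clusters embedding vectors and combines cosine similarity with TF-IDF scoring across two stages, while this sketch uses raw term-frequency vectors and a single near-duplicate pass. The intuition it borrows from the description above is that poisoned passages injected for one attack tend to resemble each other closely.

```python
import math
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector (stand-in for a dense embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(cnt * b.get(tok, 0) for tok, cnt in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def filter_near_duplicates(docs, threshold=0.8):
    """Drop every retrieved passage that is suspiciously similar to another one."""
    vecs = [tf_vector(d) for d in docs]
    flagged = set()
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            if cosine(vecs[i], vecs[j]) >= threshold:
                flagged |= {i, j}            # near-duplicates: likely injected
    return [d for k, d in enumerate(docs) if k not in flagged]
```

Note this runs entirely without model inference or retraining, which is the property the paper emphasizes for practicality.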
-
- Posted: 2025-10-14
- Views: 274
-
- [Oct 10, 2025] Grand Prize at the 2025 Samsung AI Challenge
- Under the guidance and support of Professor Seonjae Lee’s lab, Jihwan Byeon (Department of Software) received the Grand Prize in the AI Co-Scientist track at the 2025 Samsung AI Challenge, hosted by the Samsung DS AI Center. Congratulations on the outstanding achievement. Interview with the awardee: https://dacon.io/forum/415286
-
- Posted: 2025-10-14
- Views: 273
-
- [Oct 13, 2025] Professor Simon Seong-Il Woo’s DASH Lab – Two Papers Accepted to NeurIPS 2025
- Two papers from the DASH Lab (Data Analytics, Security, and HCI Lab, Advisor: Prof. Simon Seong-Il Woo) have been accepted to NeurIPS 2025 (The Thirty-Ninth Annual Conference on Neural Information Processing Systems), one of the world’s leading conferences in artificial intelligence. The papers will be presented in December 2025 at the San Diego Convention Center, USA. 1. “Through the Lens: Benchmarking Deepfake Detectors Against Moiré-Induced Distortions” Authors: Razaib Tariq (Ph.D. Program, Department of Software, Co-First Author), Minji Heo (M.S. Graduate, Department of Artificial Intelligence, Co-First Author), and Shahroz Tariq (CSIRO, Data61) Corresponding Author: Prof. Simon Seong-Il Woo Track: Datasets and Benchmarks This study investigates how Moiré artifacts, which occur when filming digital screens with smartphones, can significantly degrade the performance of deepfake detection models. To evaluate this phenomenon, the researchers systematically assessed state-of-the-art deepfake detectors using videos containing Moiré distortions. A total of 12,832 videos (35.64 hours) were collected from Celeb-DF, DFD, DFDC, UADFV, and FF++ datasets, reflecting diverse real-world conditions such as varying display screens, smartphone models, lighting environments, and camera angles. To analyze the effect of Moiré patterns in greater depth, the team constructed the DeepMoiréFake (DMF) dataset and applied two synthetic Moiré generation methods for controlled experimentation. The results revealed that across 15 top detection models, Moiré artifacts reduced performance by up to 25.4%, and synthetic Moiré patterns decreased accuracy by 21.4%. Surprisingly, even demoiréing techniques, commonly used to mitigate distortions, further worsened detection accuracy by up to 16%. These findings highlight the urgent need for robust deepfake detection models capable of handling Moiré distortions alongside other realistic perturbations such as compression, sharpening, and blurring. 
By releasing the DMF benchmark dataset, the study provides a critical foundation for bridging the gap between controlled research settings and real-world deepfake detection scenarios. 2. “RUAGO: Effective and Practical Retain-Free Unlearning via Adversarial Attack and OOD Generator” Authors: Sangyong Lee (Ph.D. Program, Department of Software, First Author), Sangjun Jeong (M.S. Program, Department of Artificial Intelligence, Second Author) Corresponding Author: Prof. Simon Seong-Il Woo Track: Main Track This study addresses a key challenge in machine unlearning—how to make a model effectively forget specific data when access to retain data is not available. In general, the goal of unlearning is to remove the forget set while maintaining the model’s performance on the retain set. However, when retain data cannot be accessed, models often experience severe performance degradation. To overcome this limitation, the authors propose RUAGO (Retain-free Unlearning via Adversarial Attack and Generative Model using OOD Training), a novel and practical retain-free unlearning framework. RUAGO achieves stable and effective unlearning through three main components: Adversarial Probability Module (APM): Uses soft-label-based adversarial probabilities for forget data instead of one-hot labels to prevent excessive forgetting and ensure stable optimization. OOD-based Generative Preservation: Utilizes a generator trained on Out-of-Distribution (OOD) data to preserve the original model’s knowledge without requiring retain data. The synthetic data are refined through a model inversion process to align with the model’s internal representations. Sample Difficulty Scheduler: Applies a Curriculum Learning–based knowledge distillation approach that gradually progresses from easy to hard samples, improving both early-stage stability and late-stage generalization. 
Experiments conducted on CIFAR-10, CIFAR-100, TinyImageNet, and VGGFace2 datasets demonstrate that RUAGO significantly outperforms existing retain-free methods and achieves performance comparable to or even superior to state-of-the-art retain-based methods. In addition, results from Membership Inference Attack (MIA) evaluations show that RUAGO provides a privacy protection level similar to that of a fully retrained model, proving its effectiveness in achieving both high accuracy and strong privacy preservation. Prof. Simon Seong-Il Woo | swoo@g.skku.edu | https://dash-lab.github.io/
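The soft-label idea behind RUAGO's Adversarial Probability Module can be illustrated as follows. The exact target construction is assumed; only the principle taken from the description above (redistribute the forgotten class's probability mass instead of forcing a one-hot wrong label, to avoid over-forgetting) is reflected here.

```python
import numpy as np

def soft_forget_target(probs, true_class, temperature=2.0):
    """Build a softened target distribution for a forget-set sample.

    probs: model's current class probabilities; true_class: label to unlearn.
    The true class gets zero mass, and the remaining mass is redistributed as
    a temperature-softened distribution rather than a one-hot wrong label.
    """
    p = np.asarray(probs, dtype=float).copy()
    p[true_class] = 0.0                       # remove belief in the forgotten label
    logits = np.log(p + 1e-12) / temperature  # soften the remaining distribution
    out = np.exp(logits - logits.max())
    out[true_class] = 0.0                     # keep the forgotten class at exactly zero
    return out / out.sum()
```

Training the unlearned model toward targets like this pushes it away from the forgotten label gradually, which is the stability property the module is described as providing.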
-
- Posted: 2025-10-14
- Views: 278



