-
- [Promotion] Professor Muhammad Khan of the Department of Global Convergence, selected as an honorary citizen of Seoul
- Professor Muhammad Khan of the Department of Global Convergence, selected as an honorary citizen of Seoul - Using artificial intelligence and computer science technology, he developed new technologies related to civil safety, and he is the only foreign researcher in Korea named to the world's top 1% researcher (HCR) list. Professor Muhammad Khan of the Department of Global Convergence was selected as an honorary citizen of Seoul on December 9 (Fri). At the honorary citizenship award ceremony, 18 foreigners from 16 countries, including Professor Khan, were named "honorary citizens of Seoul." Seoul's honorary citizenship for foreigners began in 1958, when merit citizenship was awarded to foreigners who helped rebuild the city after the war. Today it is given to foreigners living in Seoul, foreign heads of state, and diplomatic envoys who have contributed to the development of Seoul. According to the Seoul Metropolitan Government, 895 people from 100 countries had received honorary citizenship cards from Seoul as of November 30, 2022. Professor Khan was selected as an honorary citizen in recognition of his contribution to improving the level of science and technology by developing new technologies related to civil safety, such as analyzing fire sites and monitoring video for abnormal situations using artificial intelligence and computer technology. In addition, Professor Khan was named one of the "World's Most Influential Top 1% Researchers (HCR)" by Clarivate, the only foreign researcher in Korea to be so selected.
-
- Posted 2023-01-11
- Views 926
-
- [Promotion] 2022 SKKU Graduate Students Win Grand Prizes for Papers
- ◦ Excellence Award in Humanities and Social Sciences: Lee Hae-in/Lee Sun-hong/Jeong Hae-sun, Department of Artificial Intelligence Convergence
  ◦ Excellence Award in Natural Science: Lee Bo-hyun, Department of Software
  ◦ Excellence Award in Natural Science: Kim Ji-hwan, Department of Artificial Intelligence
  ◦ Encouragement Award in Natural Science: Na Chul-won, Department of Artificial Intelligence
-
- Posted 2023-01-03
- Views 879
-
- [Research News] Professor Heo Jae-Pil's lab has two papers accepted for publication at ECCV 2022
- Two papers by the Visual Computing Laboratory (Advisor: Professor Jae-Pil Heo) have been accepted for publication at the European Conference on Computer Vision (ECCV) 2022, a top-tier academic conference in the field of computer vision and artificial intelligence.

Paper #1: "Tailoring Self-Supervision for Supervised Learning" (WonJun Moon, M.S. in Artificial Intelligence; Ji-Hwan Kim, Ph.D. in Artificial Intelligence)
Paper #2: "Difficulty-Aware Simulator for Open Set Recognition" (WonJun Moon, Junho Park, Hyun-Seok Seong, and Cheol-Ho Cho, M.S. in Artificial Intelligence)

In the "Tailoring Self-Supervision for Supervised Learning" paper, we first point out the problems that can arise when a self-supervision task is applied to a supervised learning setting without modification. For a self-supervision task that assists the supervised learning objective, we present three properties such a task should have and propose a new task, Localizable Rotation, that satisfies them. The proposed method brings consistent performance improvements on several benchmarks that test the robustness and generalization capability of deep learning models.

The paper "Difficulty-Aware Simulator for Open Set Recognition" presents a new method for simulating virtual samples for open set recognition. Open set recognition identifies data from new classes not seen during training and is an essential technology for applying artificial intelligence in the real world. Existing methods also generated virtual samples for model training, but this paper shows that existing techniques struggle to cope with open set samples of varying difficulty. We propose a Difficulty-Aware Simulator framework that simulates open set samples at various difficulty levels. The proposed technique produces virtual samples at each difficulty level, as intended, from the classifier's standpoint, and with it we achieve high performance in the open set recognition field.

[Paper #1 Information] Tailoring Self-Supervision for Supervised Learning
WonJun Moon, Ji-Hwan Kim, and Jae-Pil Heo
European Conference on Computer Vision (ECCV), 2022
Abstract: Recently, it is shown that deploying a proper self-supervision is a prospective way to enhance the performance of supervised learning. Yet, the benefits of self-supervision are not fully exploited as previous pretext tasks are specialized for unsupervised representation learning. To this end, we begin by presenting three desirable properties for such auxiliary tasks to assist the supervised objective. First, the tasks need to guide the model to learn rich features. Second, the transformations involved in the self-supervision should not significantly alter the training distribution. Third, the tasks are preferred to be light and generic for high applicability to prior arts. Subsequently, to show how existing pretext tasks can fulfill these and be tailored for supervised learning, we propose a simple auxiliary self-supervision task, predicting localizable rotation (LoRot). Our exhaustive experiments validate the merits of LoRot as a pretext task tailored for supervised learning in terms of robustness and generalization capability.

[Paper #2 Information] Difficulty-Aware Simulator for Open Set Recognition
WonJun Moon, Junho Park, Hyun Seok Seong, Cheol-Ho Cho, and Jae-Pil Heo
European Conference on Computer Vision (ECCV), 2022
Abstract: Open set recognition (OSR) assumes unknown instances appear out of the blue at the inference time. The main challenge of OSR is that the response of models for unknowns is totally unpredictable. Furthermore, the diversity of open set makes it harder since instances have different difficulty levels. Therefore, we present a novel framework, DIfficulty-Aware Simulator (DIAS), that generates fakes with diverse difficulty levels to simulate the real world. We first investigate fakes from generative adversarial network (GAN) in the classifier's viewpoint and observe that these are not severely challenging. This leads us to define the criteria for difficulty by regarding samples generated with GANs having moderate-difficulty. To produce hard-difficulty examples, we introduce Copycat, imitating the behavior of the classifier. Furthermore, moderate- and easy-difficulty samples are also yielded by our modified GAN and Copycat, respectively. As a result, DIAS outperforms state-of-the-art methods with both metrics of AUROC and F-score.
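As a rough illustration of the localizable rotation (LoRot) idea described above, the pretext transform can be mocked up as rotating one quadrant of a square image and asking the model to predict which quadrant was rotated and by how much. This is a minimal sketch, not the authors' implementation: the patch layout, label encoding, and sizes here are assumptions.

```python
import numpy as np

def lorot_transform(img, rng):
    """Rotate one quadrant of a square image and return the transformed
    image plus an auxiliary label (quadrant * 4 + rotation).

    Toy sketch of a LoRot-style pretext task: an auxiliary head would be
    trained to predict WHICH local patch was rotated and BY HOW MUCH,
    alongside the usual supervised objective.
    """
    h, w = img.shape[:2]                 # assumes a square image
    quadrant = rng.integers(4)           # which patch to rotate
    k = rng.integers(4)                  # rotation in multiples of 90 degrees
    top = (quadrant // 2) * (h // 2)
    left = (quadrant % 2) * (w // 2)
    patch = img[top:top + h // 2, left:left + w // 2]
    out = img.copy()
    out[top:top + h // 2, left:left + w // 2] = np.rot90(patch, k)
    return out, quadrant * 4 + k         # 16 auxiliary classes

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)
aug, label = lorot_transform(img, rng)
```

In training, the 16-way auxiliary label would be predicted by an extra classification head whose loss is added to the supervised loss.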
-
- Posted 2022-09-22
- Views 1055
-
- [Research News] Five DASH Laboratory papers accepted for publication at the CIKM 2022 international conference
- Five papers by DASH Laboratory members Youjin Shin (SW department), Eun-Ju Park (SW department), Gwanghan Lee (AI department), Hanbeen Lee (AI department), Jeongho Kim (AI department), Saebyeol Shin (Data Science department), Binh M. Le (SW department), and Chingis Oinar (SW department) will be published in October, having been accepted as final papers at the Conference on Information and Knowledge Management (CIKM) 2022 (BK IF=3), a top-tier international academic conference in artificial intelligence and information retrieval.

1. A time series-based orbit prediction and anomaly detection study with aerospace researchers (Youjin Shin, Eun-Ju Park)
2. A neural network pruning study (Gwanghan Lee, Saebyeol Shin)
3. A study on the development of content privacy and hazard-related detection models for USC and YouTube (Binh M. Le, Chingis Oinar; international collaboration)
4. An adversarial attack study on time series data from Australia's CSIRO Data61 (Binh M. Le; international collaboration)
5. A study on improving performance across various vision tasks by proposing a self-knowledge distillation technique (Hanbeen Lee, Jeongho Kim)

Thanks to the students who did exceptional work! We appreciate their efforts!

1. Youjin Shin, Eun-Ju Park, Simon S. Woo, Okchul Jung and Daewon Chung, "Selective Tensorized Multi-layer LSTM for Orbit Prediction", Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022.

In this work, we propose Selective Tensorized Multi-layer LSTM (ST-LSTM) as a model for predicting satellite orbits. Recently, the risk of satellite collisions has increased due to the rapid growth in the number of satellites. To prevent unexpected situations such as collisions, it is important to predict a satellite's orbit accurately. ST-LSTM selectively applies a tensorizing layer, which tensorizes the deep learning weight matrices, to a multi-layer LSTM. Experiments against various comparative models, on data from two real-world satellites provided by the Korea Aerospace Research Institute (KARI), show that ST-LSTM reduces the computation volume while maintaining high accuracy.

Abstract: Although the collision of space objects not only incurs a high cost but also threatens human life, the risk of collision between satellites has increased, as the number of satellites has rapidly grown due to the significant interest in many space applications. However, it is not trivial to monitor the behavior of the satellite in real time since the communication between the ground station and spacecraft is dynamic and sparse, and there is an increased latency due to the long distance. Accordingly, it is strongly required to predict the orbit of a satellite to prevent unexpected contingencies such as a collision. Therefore, real-time monitoring and accurate orbit prediction are required. Furthermore, it is necessary to compress the prediction model, while achieving a high prediction performance, in order to be deployable in real systems. Although several machine learning and deep learning-based prediction approaches have been studied to address such issues, most of them have applied only basic machine learning models for orbit prediction without considering the size, running time, and complexity of the prediction model. In this research, we propose Selective Tensorized multi-layer LSTM (ST-LSTM) for orbit prediction, which not only improves the orbit prediction performance but also compresses the size of the model so that it can be applied in practical deployment scenarios. To evaluate our model, we use the real orbit dataset collected from the Korea Multi-Purpose Satellites (KOMPSAT-3 and KOMPSAT-3A) of the Korea Aerospace Research Institute (KARI) for 5 years. In addition, we compare our ST-LSTM to other machine learning-based regression models, LSTM, and basic tensorized LSTM models with regard to the prediction performance, model compression rate, and running time.

2. Gwanghan Lee, Saebyeol Shin, and Simon S. Woo, "Accelerating CNN via Dynamic Pattern-based Pruning Network", Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022.

In this study, we propose a dynamic pruning method that achieves actual acceleration. Traditional dynamic pruning methods produce a different sparse pattern for each input sample, which makes actual acceleration difficult due to the additional overhead this incurs. To address this, we propose a new dynamic pruning method that increases the representational power of the convolution kernel and uses BLAS libraries to facilitate acceleration. Experiments on the CIFAR and ImageNet datasets show improved accuracy compared to existing state-of-the-art methods.

Abstract: Most dynamic pruning methods fail to achieve actual acceleration due to the extra overheads caused by indexing and weight-copying to implement the dynamic sparse patterns for every input sample. To address this issue, we propose Dynamic Pattern-based Pruning Network, which preserves the advantages of both static and dynamic networks. Unlike previous dynamic pruning methods, our novel method dynamically fuses static kernel patterns, enhancing the kernel's representational power without additional overhead. Moreover, our dynamic sparse pattern enables an efficient process using BLAS libraries, accomplishing actual acceleration. We demonstrate the effectiveness of the proposed network on CIFAR and ImageNet, outperforming the state-of-the-art methods by achieving better accuracy with lower computational cost.

3. Binh M. Le, Rajat Tandon, Chingis Oinar, Jeffrey Liu, Uma Durairaj, Jiani Guo, Spencer Zahabizadeh, Sanjana Ilango, Jeremy Tang, Fred Morstatter, Simon Woo and Jelena Mirkovic, "Samba: Identifying Inappropriate Videos for Young Children on YouTube", Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022.

In this paper, we propose a fusion model called Samba, which uses both metadata and video subtitles to classify YouTube videos for children. Previous studies used metadata such as video thumbnails, titles, and comments to detect videos inappropriate for children. These metadata-based approaches achieve high accuracy but still produce significant misclassifications due to the reliability of the input features. By adding representation features from subtitles, pretrained with a self-supervised contrastive framework, the Samba model outperforms other state-of-the-art classifiers by at least 7%. We also release a dataset of 70,000 videos to encourage future research.

Abstract: In this paper, we propose a fusion model, called Samba, which uses both metadata and video subtitles for classifying YouTube videos for kids. Previous studies utilized metadata, such as video thumbnails, title, comments, etc., for detecting inappropriate videos for young viewers. Such metadata-based approaches achieve high accuracy but still have significant misclassifications due to the reliability of input features. By adding representation features from subtitles, which are pretrained with a self-supervised contrastive framework, our Samba model can outperform other state-of-the-art classifiers by at least 7%. We also publish a large-scale, comprehensive dataset of 70K videos for future studies.

4. Shahroz Tariq, Binh M. Le and Simon Woo, "Towards an Awareness of Time Series Anomaly Detection Models' Adversarial Vulnerability", Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022.

In this work, we aim to raise awareness of the adversarial vulnerability of anomaly detectors for time series data. Adding small adversarial perturbations to sensor data severely weakens anomaly detection systems. The performance of state-of-the-art deep neural networks (DNNs) and graph neural networks (GNNs), which are claimed to be robust against anomalies and usable in real systems, drops to 0% under adversarial attacks such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). To the best of our knowledge, this work is the first to demonstrate the vulnerability of anomaly detection systems to adversarial attacks.

Abstract: Time series anomaly detection is studied in statistics, ecology, and computer science. Numerous time series anomaly detection strategies have been presented utilizing deep learning. Many of these methods exhibit state-of-the-art performance on benchmark datasets, giving the false impression that they are robust and deployable in a wide variety of real-world scenarios. In this study, we demonstrate that adding modest adversarial perturbations to sensor data severely weakens anomaly detection systems. Under well-known adversarial attacks such as Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), we demonstrate that the performance of state-of-the-art deep neural networks (DNNs) and graph neural networks (GNNs), which claim to be robust against anomalies and possibly be used in real-world systems, drops to 0%. We demonstrate for the first time, to our knowledge, the vulnerability of anomaly detection systems to adversarial attacks. This study aims to increase awareness of the adversarial vulnerabilities of time series anomaly detectors.

5. Hanbeen Lee, Jeongho Kim and Simon Woo, "Sliding Cross Entropy for Self-Knowledge Distillation", Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022.

In this work, we propose Sliding Cross Entropy (SCE), which improves performance when combined with existing self-knowledge distillation methods. To minimize the difference between the soft target obtained by self-distillation and the model's output logits, each aligned softmax representation is divided by a specific window, and the distance between the divided slices is minimized. This lets the model consider the inter-class relationships of the soft target equally during optimization. Through various experiments, the proposed SCE demonstrates performance beyond existing baseline methods in classification, object detection, and segmentation.

Abstract: Knowledge distillation (KD) is a powerful technique for improving the performance of a small model by leveraging the knowledge of a larger model. Despite its remarkable performance boost, KD has a drawback with the substantial computational cost of pre-training larger models in advance. Recently, a method called self-knowledge distillation has emerged to improve the model's performance without any supervision. In this paper, we present a novel plug-in approach called Sliding Cross Entropy (SCE) method, which can be combined with existing self-knowledge distillation to significantly improve the performance. Specifically, to minimize the difference between the output of the model and the soft target obtained by self-distillation, we split each softmax representation by a certain window size, and reduce the distance between sliced parts. Through this approach, the model evenly considers all the inter-class relationships of a soft target during optimization. The extensive experiments show that our approach is effective in various tasks, including classification, object detection, and semantic segmentation. We also demonstrate SCE consistently outperforms existing baseline methods.

Simon S. Woo | swoo@skku.edu | Data-driven AI Security HCI (DASH) Lab | http://dash.skku.edu/
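The window-slicing step described for Sliding Cross Entropy can be sketched roughly as follows. This is a toy sketch only: the alignment, renormalization, and averaging details here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sliding_cross_entropy(student_logits, soft_target, window, eps=1e-12):
    """Toy sketch of the SCE idea: align the student distribution with the
    soft target, slide a window over both, renormalize each slice, and
    average the per-slice cross-entropies so that small inter-class
    probabilities contribute as much to the loss as the top class does.
    """
    p = softmax(student_logits)
    q = soft_target
    order = np.argsort(-q)              # align both by the target's class ranking
    p, q = p[order], q[order]
    losses = []
    for start in range(len(q) - window + 1):
        ps = p[start:start + window] / p[start:start + window].sum()
        qs = q[start:start + window] / q[start:start + window].sum()
        losses.append(-(qs * np.log(ps + eps)).sum())  # slice cross-entropy
    return float(np.mean(losses))
```

Because each slice is renormalized, a student that matches the soft target only on the dominant class still pays a loss on the slices covering the tail classes, which is the intuition behind considering inter-class relationships evenly.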
-
- Posted 2022-09-22
- Views 1070
-
- [Research News] Professor Ko Young-joong's Natural Language Processing Laboratory has two papers accepted for publication at SIGIR 2022
- Two papers by Taehun Huh and Choongwon Park, master's students in the Department of Artificial Intelligence, will be published in July 2022 at the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), a top-tier academic conference in artificial intelligence and information retrieval.

1. Choongwon Park, Youngjoong Ko, "QSG Transformer: Transformer with Query-Attentive Semantic Graph for Query-Focused Summarization", Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022), July 2022.

In this study, we propose a new technique to improve the performance of query-focused document summarization, which generates summaries relevant to a given query. The proposed technique uses multiple natural language processing techniques to connect the words of the query and the document into a single graph, which is then used to generate the summary. To use the constructed graph efficiently for query-focused summarization, we propose a new graph neural network and integrate it into a Transformer model. Experiments on two datasets show that the proposed technique outperforms previous studies.

2. Taehun Huh and Youngjoong Ko, "Lightweight Meta-Learning for Low-Resource Abstractive Summarization", Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022), July 2022.

In this study, we propose a new technique to improve the performance of low-resource abstractive summarization, where little labeled training data is available. The proposed model uses meta-learning to adapt quickly to its target domain with less data. It also addresses overfitting to scarce data by training only lightweight modules added to an existing language model. Experiments on a total of 11 summarization datasets show higher ROUGE scores than previous studies.

Ko Young-joong | yjko@skku.edu | Natural Language Processing Lab | http://nlp.skku.edu/
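The idea of training only lightweight modules added to a frozen language model can be illustrated with a toy residual adapter. Everything here (sizes, names, the bottleneck design) is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

class AdapterLayer:
    """Toy sketch of the 'lightweight module' idea: a frozen base linear
    layer plus a small trainable bottleneck adapter added residually.
    Only the adapter's parameters would be updated during low-resource
    fine-tuning; the base weights stay frozen.
    """
    def __init__(self, dim, bottleneck, rng):
        self.W_base = rng.standard_normal((dim, dim))        # frozen base weights
        self.W_down = rng.standard_normal((dim, bottleneck)) * 0.01
        self.W_up = np.zeros((bottleneck, dim))              # zero init: starts as a no-op

    def forward(self, x):
        base = x @ self.W_base
        adapter = np.maximum(x @ self.W_down, 0.0) @ self.W_up  # ReLU bottleneck
        return base + adapter                                 # residual add

    def trainable_parameters(self):
        return [self.W_down, self.W_up]                       # base excluded

rng = np.random.default_rng(0)
layer = AdapterLayer(dim=8, bottleneck=2, rng=rng)
x = rng.standard_normal((3, 8))
```

Zero-initializing the up-projection means fine-tuning starts exactly from the pretrained model's behavior, and the trainable parameter count stays far below that of the frozen base, which is what mitigates overfitting on scarce data.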
-
- Posted 2022-06-15
- Views 1039