Keynote Speakers - 2024


Prof. Thomas C. Henderson
Professor
  University of Utah, United States
Advisory Committee
  IEEE International Conference on Robotics and Systems
Research Interests
  Autonomous Cognitive Systems, Robotics and Computer Vision
Lane-Based Large-Scale Unmanned Aircraft Systems Traffic Management

The FAA and NASA are developing an Urban Air Mobility concept as part of the Advanced Air Mobility (AAM) program, defining an Unmanned Aircraft Systems (UAS) Traffic Management (UTM) architecture. The combined scale and density of the expected air traffic, as well as the algorithmic complexity of maintaining safe separation, are driving a consensus that a structured airspace will eventually be required. Against this background, a lane-based airspace structure is proposed here whose motivation is to reduce the computational complexity of strategic deconfliction by providing UAS agents with a set of pre-defined airway corridors called lanes. To achieve this complexity reduction, the airspace is modeled as a directed graph in which every node has either in-degree or out-degree equal to one, and flight plans consist of a scheduled sequence of lane traversals. The major results are: (1) the creation and layout of lane structures, (2) an efficient lane-based strategic deconfliction scheduling algorithm, (3) lane-network performance analysis tools, and (4) a tactical deconfliction protocol to handle dynamic contingencies (e.g., failure to follow the nominal flight plan). In conclusion, this approach provides efficient scheduling of safe flight paths, straightforward analysis of stream properties of the transportation system, an effective contingency handling protocol, and scalability to thousands of flights over urban areas.
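To make the lane idea concrete, here is a minimal, illustrative Python sketch (not Prof. Henderson's implementation): lanes are edges of a directed graph with fixed traversal times, and a flight plan is accepted at the earliest launch time whose lane-entry times keep an assumed minimum headway from flights already scheduled on the same lanes.

```python
# Illustrative sketch of lane-based strategic deconfliction (not the speaker's code).
# Lanes are directed edges with fixed traversal times; a flight plan is a lane sequence.
# A plan is accepted at the earliest launch time whose lane-entry times keep a minimum
# headway from every flight already scheduled on the same lane.

HEADWAY = 30.0  # required time separation (s) between entries into the same lane (assumed)

lanes = {            # lane id -> traversal time in seconds (toy network)
    ("A", "B"): 60.0,
    ("B", "C"): 45.0,
    ("B", "D"): 50.0,
}
schedule = {lane: [] for lane in lanes}   # lane id -> entry times already granted


def entry_times(plan, launch):
    """Entry time into each lane of the plan for a given launch time."""
    t, times = launch, []
    for lane in plan:
        times.append(t)
        t += lanes[lane]
    return times


def conflict_free(plan, launch):
    return all(
        abs(t - other) >= HEADWAY
        for lane, t in zip(plan, entry_times(plan, launch))
        for other in schedule[lane]
    )


def deconflict(plan, requested_launch, step=5.0):
    """Delay the launch in small steps until the plan is conflict free (first come, first served)."""
    launch = requested_launch
    while not conflict_free(plan, launch):
        launch += step
    for lane, t in zip(plan, entry_times(plan, launch)):
        schedule[lane].append(t)
    return launch


if __name__ == "__main__":
    plan = [("A", "B"), ("B", "C")]
    print(deconflict(plan, 0.0))    # 0.0  (first flight launches on time)
    print(deconflict(plan, 10.0))   # 30.0 (second flight delayed to keep 30 s headway)
```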

Thomas C. Henderson received his BS in Math with Honors from Louisiana State University in 1973 and his PhD in Computer Science from the University of Texas at Austin in 1979. He is currently a full Professor in the School of Computing at the University of Utah. He has been at Utah since 1982; he was a visiting professor at DLR in Germany in 1980, at INRIA in France in 1981 and 1987, and at the University of Karlsruhe, Germany in 2003 and 2011, and was a Program Director at the National Science Foundation in 2010. Prof. Henderson was chairman of the Department of Computer Science at Utah from 1991-1997, and was the founding Director of the School of Computing from 2000-2003. Prof. Henderson is the author of Discrete Relaxation Techniques (Oxford University Press, 1990), Computational Sensor Networks (Springer, 2009), Analysis of Engineering Drawings and Raster Map Images (Springer, 2014), and Lane-Based UAS Traffic Management (Springer, 2021) and editor of Traditional and Non-Traditional Robotic Sensors (Springer-Verlag, 1990); he has served as Co-Editor-in-Chief of the Journal of Robotics and Autonomous Systems and as an Associate Editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence and IEEE Transactions on Robotics and Automation. His research interests include autonomous cognitive systems, robotics and computer vision, and his ultimate goal is to help realize functional androids. He has produced over 250 scholarly publications, and has been principal investigator on over $13M in research funding. Prof. Henderson is a Fellow of the IEEE, and received the Governor's Medal for Science and Technology in 2000. He enjoys good dinners with friends, reading, playing basketball and hiking.



Dr. Josu Bilbao
Director of Electronics, Information and Communication Technologies
   IKERLAN, MONDRAGON Corporation, Spain
Technical Committee
  IEEE Transactions on Communications
  IEEE Communications Magazine
  IEEE Globecom
Research Interests
  IoT and IIoT Security, Network Coding, Cloud and Fog-based Architectures, Edge Computing, 5G and Quantum Computing
What will digital technologies bring to the Industry? Collaborative Artificial Intelligence: a lever for the technological revolution.

We are facing what could be the greatest revolution of our era. Over the last decade, the proliferation of Industry 5.0 projects and the paradigm of the Industrial Internet of Things have filled the ecosystems of many industries with data about their processes, machine performance, etc. In turn, the democratization of computing power is allowing us to process ML models both at the Edge and on IoT devices. If we add these circumstances to the emergence of new paradigms in Artificial Intelligence, which aim to provide privacy, trustworthiness, and sustainability to AI, we are facing the "perfect storm" for a new industrial revolution. During the talk, I will review the main concepts and techniques that are enabling industry to navigate the path of change towards this technological revolution, including new collaborative AI schemes, federated learning, and new data space architectures.
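One of the collaborative AI schemes mentioned above, federated learning, can be summarized in a short sketch. The example below is a generic illustration of the classical federated averaging (FedAvg) idea, not IKERLAN's implementation: each site trains a model locally on its own data, and only the model parameters, weighted by local sample counts, are shared and averaged.

```python
import numpy as np

# Minimal federated averaging (FedAvg) sketch for a linear model trained with
# gradient descent on a squared loss. Purely illustrative; not IKERLAN's code.

def local_update(w, X, y, lr=0.01, epochs=20):
    """Each client refines the global weights on its private data only."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, w, rounds=30):
    """Server averages client models weighted by their local sample counts."""
    for _ in range(rounds):
        updates = [(local_update(w, X, y), len(y)) for X, y in clients]
        total = sum(n for _, n in updates)
        w = sum(n * wk for wk, n in updates) / total
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three "plants", each with its own private dataset that never leaves the site.
    clients = []
    for n in (40, 60, 80):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.05 * rng.normal(size=n)
        clients.append((X, y))
    w = fedavg(clients, np.zeros(2))
    print(np.round(w, 2))   # close to [ 2. -1.] without pooling the raw data
```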

Josu Bilbao obtained the Telecommunication Engineering degree from the Faculty of Engineering of Bilbao (UPV/EHU), the M.Sc. degree in Communications and Control from UPV/EHU, and the Ph.D. degree in Computer Science from the University of Navarra. He is the Director of Electronics, Information and Communication Technologies (EICT) at IKERLAN, a private research centre and a leader in technology transfer, part of MONDRAGON Corporation (one of the largest Spanish business groups). The EICT unit consists of ~220 full-time researchers and covers the following technological departments: hardware platforms & communication systems, dependable (safety) and real-time embedded systems, industrial cyber-security, and information & communication technologies (ICT), including digital platforms (Edge-Cloud), IoT, artificial intelligence and quantum computing. Dr. Bilbao has been responsible for defining the strategy and leading the development of Digital Platforms for multiple sectors, including: Intelligent Transportation, Energy generation-storage and Distribution, Capital Goods, Industry 4.0, and Consumer Electronics. He has led the development of multiple industrial projects, including the design and development of embedded systems with customized connectivity solutions. Dr. Bilbao, an IEEE Senior Member, is also a member of multiple international technical committees for the evaluation of scientific works and papers (IEEE Trans. on Communications, IEEE Communications Magazine, IMETI, ETFA, IEEE Globecom, JSCI, Smart Grids IoT, etc.), and is the main author of multiple journal and international peer-reviewed papers in the field of reliable communications and fog-to-cloud based architectures. Dr. Bilbao has been a visiting scientist at MIT and received the best paper award at IEEE ISPLC. He plays an active role in different standardization committees (IEEE, HANA, ETG, IETF, etc.), and his current research interests span several fields such as safety and security for the IoT and IIoT, reliable communications, network coding, QoS, real-time CPS integration in the IoT, Cloud and Fog-based architectures, Artificial Intelligence, data analytics, Edge computing, 5G and Quantum Computing, among others.



Prof. Choong Seon Hong
Professor
  Kyung Hee University, Korea
Fellow, IEEE
Research Interests
  Future Internet, Intelligent Edge Computing, Network Management, Network Security
AI for Management of Future Wireless Networks

This keynote provides a comprehensive overview of recent advancements in AI and generative AI applied to wireless networks, with a specific focus on 6G networks. The key topics covered include network administration, edge computing, non-terrestrial networks (NTNs), content generation, and collective intelligence. We explore how 6G networks aim to achieve global connectivity by integrating terrestrial networks and NTNs, such as satellite-, UAV- and HAP-based networks. NTNs, operating in spaceborne and airborne environments, present unique challenges in terms of propagation, latency, and mobility. To address these challenges, we introduce AI techniques that adapt to NTN conditions. We start by giving an overview of NTNs in the context of 6G, highlighting their importance. We then discuss the role of AI in enhancing network planning, resource allocation, and interference management. We also examine the challenges and opportunities of AI-powered NTN implementations in 6G networks. Finally, we explore the potential of multi-agent generative AI in wireless networks, emphasizing how it can synergize with large language models (LLMs), edge networks, and multi-agent systems. We envision self-governed networks where on-device LLMs collaborate to achieve network goals and discuss the limitations of cloud-based LLMs from a game-theoretic perspective.

Choong Seon Hong (Fellow of IEEE) received the B.S. and M.S. degrees in electronic engineering from Kyung Hee University, Seoul, South Korea, in 1983 and 1985, respectively, and the Ph.D. degree from Keio University, Tokyo, Japan, in 1997. In 1988, he joined KT, Gyeonggi-do, South Korea, where he was involved in broadband networks as a member of the Technical Staff. In 1993, he joined Keio University. He was with the Telecommunications Network Laboratory, KT, as a Senior Member of Technical Staff and as the Director of the Networking Research Team until 1999. Since 1999, he has been a Professor with the Department of Computer Science and Engineering, Kyung Hee University. His research interests include future Internet, intelligent edge computing, network management, and network security. Dr. Hong is a member of the Association for Computing Machinery (ACM), the Institute of Electronics, Information and Communication Engineers (IEICE), the Information Processing Society of Japan (IPSJ), the Korean Institute of Information Scientists and Engineers (KIISE), the Korean Institute of Communications and Information Sciences (KICS), the Korean Information Processing Society (KIPS), and the Open Standards and ICT Association (OSIA). He has served as the General Chair, the TPC Chair/Member, or an Organizing Committee Member of international conferences, such as the Network Operations and Management Symposium (NOMS), International Symposium on Integrated Network Management (IM), Asia-Pacific Network Operations and Management Symposium (APNOMS), End-to-End Monitoring Techniques and Services (E2EMON), IEEE Consumer Communications and Networking Conference (CCNC), Assurance in Distributed Systems and Networks (ADSN), International Conference on Parallel Processing (ICPP), Data Integration and Mining (DIM), World Conference on Information Security Applications (WISA), Broadband Convergence Network (BcN), Telecommunication Information Networking Architecture (TINA), International Symposium on Applications and the Internet (SAINT), and International Conference on Information Networking (ICOIN). He was an Associate Editor of the IEEE Transactions on Network and Service Management and the IEEE Journal of Communications and Networks, an Associate Technical Editor of the IEEE Communications Magazine, and a Guest Editor of the IEEE Network Magazine. He currently serves as an Associate Editor for the International Journal of Network Management and the Future Internet journal.



Prof. Abdul Wahab Bin Abdul Rahman
Professor
   International Islamic University Malaysia, Malaysia
Research Interests
  Computational Intelligence, Artificial Intelligence, Neuro-physiological Computation
Analysing Mental Health State Based on NeuroPhysiological Interface of Affect

The lack of easily available psychological instruments for accurate prediction of mental health state means that individuals may not realize their state of mental health, or a brain developmental disorder, until it is too late. The availability of electroencephalogram (EEG) devices and their ability to measure and capture brain waves for analysis makes it easier for researchers to use them in understanding the functionality and state of the brain. The mobility and low cost of EEG devices also make them attractive to researchers and managers for profiling employees to identify more effective training needs. In the case of brain developmental disorders, early detection is important because early intervention can help individuals lead more normal lives. Here we show some examples of our research and analyses in understanding the functionality of the brain through affective psychological understanding. The neuro-physiological interaction of affect framework allows us to analyze and predict behavior through personality traits, which provides new avenues and possibilities for profiling individuals effectively. In addition, unrecognized addiction problems and learning disabilities can also be detected for early intervention.
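As a concrete illustration of the kind of EEG processing such studies rely on (a generic sketch, not the speaker's actual pipeline), the snippet below computes classical band-power features (delta, theta, alpha, beta) from a single EEG channel with an FFT; features of this kind are what a profiling or classification model would consume.

```python
import numpy as np

# Generic EEG band-power feature extraction (illustrative only).
FS = 256  # sampling rate in Hz (assumed)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Return the average spectral power of the signal in each classical EEG band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {
        name: psd[(freqs >= lo) & (freqs < hi)].mean()
        for name, (lo, hi) in BANDS.items()
    }

if __name__ == "__main__":
    t = np.arange(0, 4, 1.0 / FS)                                      # 4 s of synthetic data
    eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))   # 10 Hz rhythm + noise
    powers = band_powers(eeg)
    print(max(powers, key=powers.get))   # expected: 'alpha' (10 Hz falls in the alpha band)
```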

Abdul Wahab Bin Abdul Rahman received the B.S. degree in electronic engineering from the University of Essex, England in 1979. He then received the M.S. degree in electronic engineering from the National University of Singapore in 1987 and the Ph.D. degree in computer engineering from Nanyang Technological University, Singapore in 2004. He started his career with Hewlett Packard Singapore as a production engineer and was promoted to R&D Project Manager in 1982. He worked in Singapore and Colorado, USA prior to joining the faculty of Nanyang Technological University, Singapore in 1990. In 2009 he joined the Kulliyyah of Information and Communication Technology, International Islamic University Malaysia. Professor Abdul Wahab has published more than 100 conference papers, journal articles, patents and book chapters in the areas of digital and optical computing, signal processing, artificial intelligence, and neuroscience and computing. He has taught courses on digital system design, computer organization and architecture, and research methodology, and has served as industrial attachment coordinator. His areas of expertise cover digital system design based on reconfigurable logic, speech processing (especially speech enhancement, speech recognition and speaker identification), and the integration of signal processing with fuzzy neural networks (especially models of the cerebellum). His current research is in the areas of understanding and analyzing brain developmental disorders using EEG and ECG as neuro-cardio physiological data for modeling the brain and the heart. Professor Abdul Wahab has been a board member of Mercy Relief Singapore since 2003 and was vice chairman of Mercy Relief Singapore from 2008 to 2012. In 2008 he received the Friends and Goodwill award from the Singapore SOKA Association for humanitarian contributions, and he received a long service award from the Ministry of Community Development and Sports (MCDS) in 2004. He was also the winner of the Rotary-ITE Alumni Professional Achievement Award in 2003 and was named a Paul Harris Fellow of the Rotary Foundation of Rotary International. In 2006 he was awarded the most popular teacher award in the School of Computer Engineering, NTU. Professor Abdul Wahab was an advisor to the Singapore Malay Chamber of Commerce (1989-1993). As chairman and supervisor of the IRSYAD School, Singapore (1996-2000), he transformed the school into a more English-speaking, science-oriented institution. He was also a Council member of the Islamic Religious Council of Singapore (MUIS) (1997-2004). From 1993 to 1994 Professor Abdul Wahab was a consultant to OMNI DESIGN Singapore Pte. Ltd. for the design, development and project management of ergonomic keyboards and a 3D pointing device, and from 1994 to 1999 he was chairman of the technical committee to review SS337:1989 - Safety of Information Technology Equipment Including Business Equipment (TC 118/4/8R), SISIR, Singapore.



Prof. Atsuyuki Morishima
Professor
   University of Tsukuba, Japan
Steering Committee
   iSchools, ICADL, DASFAA and ACM/IEEE JCDL
Research Interests
  Human-in-the-loop Data Systems, Data Integration, and Digital Libraries
Open-World Information Management with Earth-Scale Human-ML-Logic Teams

This talk addresses the interaction of humans, AIs and logics through the results of our recent crowdsourcing research projects, in which the closed-world assumption does not hold and workers include not only humans but also “AI workers” that complete tasks in cooperation with them. In the talk, we address several different settings of the open-world information management problem and introduce some of our approaches and findings. The research results were partly implemented on our all-academic crowdsourcing platform Crowd4U, which has been used for a variety of real-world projects in different domains involving many universities and organizations.

Atsuyuki Morishima received the B.S. degree in information science and the M.S. and Ph.D. degrees in Engineering from the University of Tsukuba, Japan in 1993, 1995, and 1998, respectively. He is now a professor at the Institute of Library, Information and Media Science and the Center for Artificial Intelligence Research, University of Tsukuba, Japan. He is currently serving as an associate dean of the institute and the Asia-Pacific regional chair of iSchools. His research interests include human-in-the-loop data systems, data integration and digital libraries. He has been involved in many real-world crowdsourcing projects on digital libraries, digital archives, natural disaster responses and smart cities. He served as the leader of the JST CREST CyborgCrowd Project and currently leads the Crowd4U initiative and the "computational division of labor" project. His papers have been selected as best papers or runners-up at ACM SIGMOD (2001), IPSJ Journal (2004), CAiSE (2015), Emerald Journals (2018), DBSJ Journal (2012, 2021), ACM WebSci (2022) and ICADL (2022). He has contributed to the organization and program committees of conferences and journals in the database, crowdsourcing, digital libraries and information science communities, such as VLDB, HCOMP, ICADL, JCDL and iSchools.




Keynote Speakers - 2023


Dr. Surya Nepal
Research Group Leader/Senior Principal Research Scientist
  Distributed Systems Security, CSIRO’s Data61
Deputy Research Director
  Cyber Security Cooperative Research Centre (CSCRC)
Editorial board
  IEEE Transactions on Services Computing (TSC)
  IEEE Transactions on Dependable and Secure Computing (TDSC)
  ACM Transactions on Internet Technology (TOIT)
  Frontiers in Big Data (Data Privacy and Cyber Security)
Research Interests
  Data Privacy, Cyber Security, Distributed Systems
Security and Privacy of AI Systems

AI/ML technology has the potential to bring significant benefits to the economy and society, and it holds tremendous promise. The technology has been developed, deployed, and adopted in many real-life critical applications to fulfil this promise: it helps us drive cars, helps doctors diagnose, helps employers hire, helps governments create policies, makes our cyberspace secure and safe, and addresses the skill shortage through automation. However, it also introduces significant risks that need to be managed. For example, backdoor attacks insert hidden associations or triggers into deep learning models to override correct inferences such as classification, making the system behave maliciously according to the attacker-chosen target while generally behaving normally in the absence of the trigger. In addition, it has been demonstrated that ML models learn more than necessary from the data and endanger individuals' privacy. Hence, AI/ML systems must have the properties of trustworthy computing, such as security and privacy. This talk first provides a brief overview of security and privacy issues in deep neural networks, then presents recent efforts in building trustworthy deep neural networks, and finally discusses some challenges and opportunities.
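To make the backdoor threat model described above concrete, the toy sketch below (purely illustrative, not material from the talk) shows how a poisoned training set is constructed: a small trigger patch is stamped onto a fraction of synthetic images whose labels are flipped to an assumed attacker-chosen target class, so a model trained on the mixture behaves normally on clean inputs but follows the trigger when it is present.

```python
import numpy as np

# Toy illustration of backdoor data poisoning on synthetic "images".
# No real model or dataset is involved; this only shows how the poisoned set is built.

rng = np.random.default_rng(0)
TARGET_CLASS = 7          # attacker-chosen target label (assumed)
POISON_RATE = 0.05        # fraction of training samples that carry the trigger

def stamp_trigger(img):
    """Place a small white square in the bottom-right corner (the trigger)."""
    img = img.copy()
    img[-3:, -3:] = 1.0
    return img

def poison_dataset(images, labels, rate=POISON_RATE, target=TARGET_CLASS):
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = stamp_trigger(images[i])  # add the trigger patch
        labels[i] = target                    # relabel to the attacker's class
    return images, labels

if __name__ == "__main__":
    X = rng.random((1000, 28, 28))            # synthetic 28x28 "images"
    y = rng.integers(0, 10, size=1000)
    Xp, yp = poison_dataset(X, y)
    print((yp == TARGET_CLASS).sum() - (y == TARGET_CLASS).sum())  # roughly 50 relabeled samples
```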

Surya Nepal is a Senior Principal Research Scientist at CSIRO Data61 and deputy research director of the Cyber Security Cooperative Research Centre (CSCRC). He has been working at CSIRO since 2000. His main research interest is in the development and implementation of technologies in the area of distributed systems, including Web Services, Cloud Computing, and Big Data, with a specific focus on security, privacy, and trust. He obtained his B.E. from the National Institute of Technology, Surat, India; M.E. from the Asian Institute of Technology, Bangkok, Thailand; and PhD from RMIT University, Australia. At CSIRO, he has researched multimedia databases, web services and service-oriented architectures, social networks, security, privacy and trust in collaborative environments and cloud systems, and big data platforms. He has more than 300 publications to his credit. Many of his works are published in top international journals and conferences such as ACM CCS, NDSS, ASIA CCS, RAID, ACM MM, VLDB, ICDE, ICWS, SCC, CoopIS, ICSOC, IEEE Transactions on Service Computing, IEEE Transactions on Parallel and Distributed Systems, ACM Transactions on Internet Technology, IEEE Transactions on Computers, Communications of the ACM and ACM Computing Surveys. Some of his papers have received the best paper award at international conferences such as IEEE CCGRID, IEEE Big Data, WISE, etc. His lifetime citation count is 12,251 (Google Scholar), with an h-index of 51 and an i10-index of 210. Dr Nepal has received several publication and invention awards at CSIRO.



Mr. Mohan Rao Goli
CVP & CTO
  Samsung Research Institute India - Bangalore, India
Research Interests
  5G/6G
Evolution of Wireless Networks for the Beyond 5G Era

Ever since the world's first 5G service was commercially launched in South Korea in 2019, the research community around the world has been thinking about what lies beyond 5G. 5G standardization aimed to address not only peak and average data rates but also the latency needs of various low-latency use cases. 5G also considered the needs of machines as users, apart from human needs. A trend that started during the 4G era itself, but is now accelerating with 5G deployments, is network function virtualization. A virtualized network that can be deployed in the cloud is easier to operate and can be scaled up or down quickly as needed. Intelligence will have a big role to play once automation and scaling need to happen at run-time in open cloud networks, to ensure that the experience of multiple services is jointly optimized. On the whole, these trends are helping operators to reduce their operational expenses as well as capital expenses. During 4G and 5G development, the role of AI was limited to system management, operations and maintenance purposes. It was a peripheral function, and the standards never assumed the presence of intelligent decision-making abilities in the network nodes or in the terminals. Going forward this will change, as the 6G network design is expected to be AI-native. 6G is expected to bring to life new use cases like immersive XR, holographic communication and digital replicas. Hence the cloud-native and AI-native evolution of beyond-5G networks is important to meet the performance and architectural requirements that will define 6G.

Mohan Rao Goli holds an MBA degree from IIM-Bangalore and a BE degree from Nagarjuna University. He has received numerous awards, such as Zinnov's "High Impact Global Role" award in 2019, Zinnov's "Best in Class CoE" award in 2022, and the "Best Overseas R&D Employee" award twice from the President of Samsung Electronics. Mohan is the Corporate Vice President and Chief Technology Officer at Samsung Research Institute India - Bangalore (SRI-B). He has spent over 24 years with Samsung in various capacities. He currently heads the Communication R&D encompassing 5G/6G standards and pre-standards research as well as 5G terminal and network solution development. He led the 5G pre-standards research and testbed development at Samsung India, which culminated in the 5G commercial launch in worldwide markets. His team has just embarked on 6G research and testbed development. Prior to this he spearheaded the global 4G and 5G R&D programme for the Samsung Mobile business, closely collaborating with ecosystem partners across the world. He and his team have been key contributors to the development of communication services like IMS, RCS and MC-PTT. Mohan is also a key proponent of open-source development. In addition to the Virtual Core and Virtual RAN solutions, his team also contributes actively to Open-RAN development. As CTO, he also oversees the overall technology strategy of Samsung India. He has over 20 inventions in the communication networks space and his current areas of interest include next-generation communication systems and AI-in-Wireless.



Prof. Hyunjin Park
Professor
  School of Electronic and Electrical Engineering, Sungkyunkwan University, Korea
Editorial board
  Biomedical Engineering Letters
  Medicine
  Scientific Reports
  Frontiers in Neuroscience
Research Interests
  Medical Image Registration and Segmentation, Medical Image Analysis for Cancer Management, Neuroimaging Analysis
Medical Image Analysis Using Machine Learning

Machine learning is a disruptive technology that is making a significant impact in many scientific disciplines, and the field of medicine is no different. Machine learning, especially the deep learning variant, has been playing a crucial role in analyzing various medical imaging modalities such as computed tomography and magnetic resonance imaging. Rapid developments in computer vision technology are rich sources of innovation for medical image analysis, but they cannot be applied blindly to the medical domain. Domain-specific adaptations that account for sample imbalance, organ conditions, and imaging physics are needed. This talk introduces our studies of medical image analysis using machine learning. 1) Radiomics is a general framework to extract high-dimensional handcrafted features that is widely used in radiology research. Here, I will discuss how to combine data-driven deep learning features with the existing radiomics framework in the lung imaging context. 2) Graph theory is a pervasive technology and is widely used in neuroimaging analysis. Here, I will discuss how machine learning can be used for mining biomarkers for various brain conditions within the graph framework. 3) Joint analysis of high-dimensional data has significant benefits in revealing new latent dimensions. Here, I will describe a framework to jointly analyze two types of representative high-dimensional data in medicine (imaging and genomic data), known as imaging genetics.
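The fusion idea in point 1 can be pictured schematically as concatenating handcrafted radiomics features with deep features per lesion before a downstream classifier; the sketch below uses placeholder feature extractors and is not the speaker's pipeline.

```python
import numpy as np

# Schematic fusion of handcrafted radiomics features with data-driven deep features.
# Both extractors below are placeholders; a real pipeline would use a radiomics
# library and a pretrained CNN backbone.

def radiomics_features(roi):
    """A few simple handcrafted intensity statistics of a lesion ROI."""
    return np.array([roi.mean(), roi.std(), roi.max() - roi.min(), (roi > roi.mean()).mean()])

def deep_features(roi, dim=16):
    """Stand-in for a CNN embedding: fixed random projection of the flattened ROI."""
    rng = np.random.default_rng(42)                      # fixed projection for repeatability
    proj = rng.normal(size=(roi.size, dim)) / np.sqrt(roi.size)
    return roi.ravel() @ proj

def fused_vector(roi):
    """Concatenate handcrafted and deep features into one descriptor."""
    return np.concatenate([radiomics_features(roi), deep_features(roi)])

if __name__ == "__main__":
    roi = np.random.default_rng(0).random((32, 32))      # toy lung-lesion ROI
    print(fused_vector(roi).shape)                       # (20,) -> input to a classifier
```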

Hyunjin Park received a B.E. in electrical engineering from Seoul National University, Seoul, Korea in 1997, an M.S. in electrical engineering from the University of Michigan in 2000, and a Ph.D. in biomedical engineering from the University of Michigan, Ann Arbor, USA in 2003. He was a research faculty member in the Department of Radiology at the University of Michigan Hospital. He is currently a Professor of electrical engineering at Sungkyunkwan University, Suwon, Korea. His research interests include image processing methods for medical imaging, medical image analysis for cancer management, and computer vision applications for medical imaging. He is on the editorial boards of several SCIE-indexed journals and has 150+ papers, 4,166 citations, and an h-index of 33 (Google Scholar) as of October 2022.



Prof. Yoshiharu Ishikawa
Professor
  Graduate School of Informatics, Nagoya University, Japan
Associate Editor
  ACM/IMS Transactions on Data Science
  The VLDB Journal
Fellow
   Information Processing Society of Japan (IPSJ)
   The Institute of Electronics, Information and Communication Engineers (IEICE)
Research Interests
  Spatio-temporal Databases, Data Mining, Information Retrieval, Web Information Systems
Approximate Database Query Processing with Error Guarantees

In recent years, query processing in databases has become more important with the increase in data and the sophistication of analysis requirements. Research on approximate query processing (AQP) has been conducted to efficiently execute queries on big data and significantly reduce response time. However, without theoretical guarantees on the quality of the approximation, the query results cannot be fully trusted. Against this background, we are working on a research topic called bounded approximate query processing. It is based on a novel synopsis construction method that focuses on aggregate queries and enables error-guaranteed, efficient query processing. In this talk, the trends in research on AQP are briefly reviewed first, and then our approach to bounded query processing is explained along with experimental results.
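The flavor of error-guaranteed AQP can be shown with the simplest possible estimator, uniform sampling plus a Hoeffding confidence bound; this sketch is only illustrative and is not the synopsis-based method developed in this line of work.

```python
import math
import numpy as np

# Simplest error-guaranteed approximate aggregation: estimate AVG(column) from a
# uniform sample and report a Hoeffding confidence interval. Illustrative only;
# the talk's bounded AQP uses purpose-built synopses rather than plain sampling.

def approx_avg(column, sample_size, delta=0.05, lo=0.0, hi=1.0):
    """Return (estimate, bound) such that |estimate - true avg| <= bound with
    probability at least 1 - delta, for values known to lie in [lo, hi]."""
    sample = np.random.default_rng(1).choice(column, size=sample_size, replace=True)
    estimate = sample.mean()
    bound = (hi - lo) * math.sqrt(math.log(2.0 / delta) / (2.0 * sample_size))
    return estimate, bound

if __name__ == "__main__":
    data = np.random.default_rng(0).random(10_000_000)   # big "table column" with values in [0, 1]
    est, eps = approx_avg(data, sample_size=20_000)
    print(f"avg ~= {est:.4f} +/- {eps:.4f} (true {data.mean():.4f})")
```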

Yoshiharu Ishikawa received the B.S., M.E., and Dr.Eng. degrees in computer science, all from the University of Tsukuba, in 1989, 1991, and 1995, respectively. He joined the Nara Institute of Science and Technology (NAIST) in 1994 as an associate professor. In 1999, he moved to the University of Tsukuba, where he later became an associate professor. In 2006, he joined the Information Technology Center, Nagoya University as a professor, and he moved to the Graduate School of Informatics at the same university in 2013. During this period, he was a visiting scholar at the University of Maryland, College Park and at Carnegie Mellon University from 1997 to 1998, and a visiting professor at the National Institute of Informatics (NII) from 2010 to 2015. His research interests include databases, big data, data mining, and scientific databases. In recent years, he has been particularly interested in database query processing, indexing methods, spatio-temporal databases, and stream processing. He has made many contributions to society, especially in the field of databases. His major contributions include program co-chair of DASFAA 2010, general co-chair of VLDB 2020, and an associate editor of the VLDB Journal (2017-now). He is an IEICE Fellow and an IPSJ Fellow. He is a member of ACM, IEEE CS, IEICE, IPSJ, and DBSJ.


Keynote Speakers - 2022


Prof. Sam Kwong
Chair Professor
  Department of Computer Science, City University of Hong Kong, China
Associate Editor
  IEEE Transactions on Industrial Informatics
  IEEE Transactions on Industrial Electronics
  IEEE Transactions on Evolutionary Computation
  Journal of Information Science
Research Interests
  Video Coding, Evolutionary Computation, Machine Learning and Pattern Recognition
Enhancing Video Coding by Data-driven Techniques and Advanced Models

On June 6, 2016, Cisco released a white paper, VNI Forecast and Methodology 2015-2020, which reported that 82 percent of Internet traffic would come from video applications such as video surveillance and content delivery networks by 2020. It also reported that Internet video surveillance traffic nearly doubled, virtual reality traffic quadrupled, Internet TV grew 50 percent, and other applications saw similar increases in 2015. Annual global traffic was projected to exceed the zettabyte (ZB; 1,000 exabytes [EB]) threshold for the first time in 2016 and to reach 2.3 ZB by 2020, implying that 1.886 ZB of it would be video data. Thus, in order to relieve the burden on video storage, streaming and other video services, researchers from the video community have developed a series of video coding standards. Among them, the most up-to-date is Versatile Video Coding (VVC), which has successfully halved the coding bits of its predecessor without a significant increase in perceived distortion. With the rapid growth of network transmission capacity, enjoying high-definition video applications anytime and anywhere on mobile display terminals will be a desirable feature in the near future. Given the significant advances in multimedia and communication technologies, numerous video applications, such as video streaming and video conferencing, have been brought into the industry and occupy the bulk of Internet traffic. Performing stable and high-quality streaming services in constrained scenarios is challenging, as they are sensitive to time delay and bandwidth fluctuation. Owing to the increasing demand for high online visual quality, several dynamic adaptive streaming techniques have been proposed to provide low-latency and high-quality video services. As the ultimate consumer of the video stream is the end-user, perceptual characteristics should be fully considered in video transmission. However, most existing algorithms do not consider video rate and transmission control with subjective factors, resulting in quality fluctuation and unnecessary bandwidth waste, which has led to emerging research on rate control and transmission optimization for dynamic adaptive streaming. In this talk, I will present the most recent research results on machine learning and game theory based video coding. This is very different from the traditional approaches to video coding. We hope that applying these intelligent techniques to video coding will allow us to go further and have more choices in trading off between cost and resources. We will present a perceptual-based rate control optimization in high efficiency video coding (HEVC) and the design of a perceptual-based dynamic adaptive video transmission optimization, focusing on the two major problems in video streaming and aiming to achieve a balance between visual quality and buffer smoothness under bandwidth constraints.
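The adaptive-streaming control problem mentioned above can be pictured with a simple buffer-based heuristic (a generic sketch, unrelated to the perceptual rate-control algorithms presented in the talk): the client picks the next segment's bitrate from the current playback-buffer level, stepping quality down when the buffer drains and up when it refills. The bitrate ladder and thresholds below are assumed values.

```python
# Generic buffer-based bitrate adaptation sketch for dynamic adaptive streaming.
# Unrelated to the perceptual rate-control methods discussed in the talk.

BITRATES = [750, 1500, 3000, 6000]   # available representations in kbps (assumed ladder)
SEGMENT_SEC = 4.0                    # segment duration in seconds

def choose_bitrate(buffer_sec, low=5.0, high=20.0):
    """Map buffer occupancy linearly onto the bitrate ladder:
    near-empty buffer -> lowest rate, comfortable buffer -> highest rate."""
    if buffer_sec <= low:
        return BITRATES[0]
    if buffer_sec >= high:
        return BITRATES[-1]
    frac = (buffer_sec - low) / (high - low)
    return BITRATES[int(frac * (len(BITRATES) - 1))]

def simulate(bandwidth_kbps, buffer_sec=10.0):
    """Download one segment per step at the chosen rate and track the buffer."""
    trace = []
    for bw in bandwidth_kbps:
        rate = choose_bitrate(buffer_sec)
        download_time = rate * SEGMENT_SEC / bw                 # seconds to fetch the segment
        buffer_sec = max(0.0, buffer_sec - download_time) + SEGMENT_SEC
        trace.append((rate, round(buffer_sec, 1)))
    return trace

if __name__ == "__main__":
    # Bandwidth drops mid-session, then recovers; watch the chosen rate and buffer react.
    print(simulate([6000, 6000, 1200, 1200, 1200, 6000, 6000]))
```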

Sam Kwong received the B.Sc. degree from the State University of New York at Buffalo, Buffalo, NY, in 1983, the M.A.Sc. degree in electrical engineering from the University of Waterloo, Waterloo, ON, Canada, in 1985, and the Ph.D. degree from the FernUniversität Hagen, Hagen, Germany, in 1996. From 1985 to 1987, he was a Diagnostic Engineer with Control Data Canada, where he designed diagnostic software to detect manufacturing faults of the VLSI chips in the Cyber 430 machine. He later joined Bell Northern Research Canada as a Member of Scientific Staff, where he worked on both the DMS-100 voice network and the DPN-100 data network project. In 1990, he joined the City University of Hong Kong as a Lecturer in the Department of Electronic Engineering. He is currently a Chair Professor in the Department of Computer Science. He coauthored three research books on genetic algorithms, eight book chapters, and over 200 technical papers. He has been a consultant to several companies in telecommunications. Currently, he is an Associate Editor for the IEEE Transactions on Industrial Informatics, the IEEE Transactions on Industrial Electronics, the IEEE Transactions on Evolutionary Computation, and the Journal of Information Science. Prof. Kwong was elevated to IEEE Fellow in 2014 for his contributions to optimization techniques for cybernetics and video coding. He has been President-Elect of the IEEE Systems, Man, and Cybernetics Society since January 2021.



Prof. Seong-Whan Lee
Hyundai-Kia Motor Chair Professor
  Department of Brain and Cognitive Engineering, Korea University, South Korea
Research Interests
  Medical Signal Processing, Electroencephalography, Brain-computer Interfaces, Signal Classification, Neurophysiology, Feature Extraction, Convolutional Neural Nets, Biomechanics Learning
Brain-to-Speech: Speech Synthesis from Neural Signals of Imagined Speech

Brain-computer interface (BCI) is a technology that enables communication with external devices by converting brain signals into computer commands. This technology, converged with artificial intelligence, is considered a core technology to lead the fourth industrial revolution. Brain-to-speech refers to a brain signal-mediated communication system that converts brain activities of imagined speech into audible speech. Current BCI speller systems have shown promising results; however, the concept of brain-to-speech has attracted attention as an intuitive BCI communication method because it directly connects neural activity to the means of human linguistic communication. It has considerable potential to explore new fields of intuitive brain decoding, which may greatly enhance the naturalness of communication using brain signals. With the current discoveries on neural features of imagined speech and the development of speech synthesis technologies, direct translation of brain signals into speech has shown significant promise. This talk introduces current brain-to-speech technology and the possibility of speech synthesis from non-invasive brain signals, which may ultimately facilitate silent communication through brain signals.

Seong-Whan Lee is the Hyundai-Kia Motor Chair Professor at Korea University, where he is the head of the Department of Artificial Intelligence and the director of the Institute of Artificial Intelligence at Korea University. He received the B.S. degree in computer science and statistics from Seoul National University, Seoul, Korea, in 1984, and the M.S. and Ph.D. degrees in computer science from Korea Advanced Institute of Science and Technology in 1986 and 1989, respectively. From February 1989 to February 1995, he was an Assistant Professor in the Department of Computer Science at Chungbuk National University, Cheongju, Korea. In March 1995, he joined the faculty of the Department of Computer Science and Engineering at Korea University, Seoul, Korea, and now he is the tenured full professor. In 2001, he stayed at the Department of Brain and Cognitive Sciences, MIT as a visiting professor. A Fellow of the IAPR (1998), IEEE (2010), and the Korean Academy of Science and Technology (2009), he has served several professional societies as chairman or governing board member. He was the founding Editor-in-Chief of the International Journal of Document Analysis and Recognition and has been an Associate Editor of several international journals: Pattern Recognition, ACM Trans. on Applied Perception, IEEE Trans. on Affective Computing, Image and Vision Computing, International Journal of Pattern Recognition and Artificial Intelligence, and International Journal of Image and Graphics. He was the founding president of the Korean Society for Artificial Intelligence. His research interests include pattern recognition, artificial intelligence, and brain engineering. He has more than 540 publications in international journals and conference proceedings, and authored 10 books.



Prof. Paolo Fiorini
Professor
   Department of Computer Science, University of Verona, Italy
Member
   Technical Staff at NASA Jet Propulsion Laboratory, California Institute of Technology, US
Honorary Professor
   Obuda University, Hungary
Research Interests
  Cyber Physical Systems, Integration of Computing, Control and Communication, Applied Computing, System Modeling and Control, Robotics, Man-Machine Interfaces.
Robotic systems for autonomous surgery

Robotic systems are increasingly used in surgical procedures and, in particular, in minimally-invasive surgeries to improve quality and procedure consistency. Current surgical robots are teleoperated devices, whose only automation capabilities are in image processing and tremor suppression. Since many surgical actions are repetitive, surgical procedures could benefit from some degree of autonomy in which specific tasks are carried out by the robot. Research in autonomous robots is advancing, and many commercial surgical robot systems could soon embed semi-autonomous features. An additional motivation for automation in surgery is to compensate for human errors, since safety in complex surgical robots highly depends on the information flow among all the involved agents. After reviewing the state of the art in autonomy for robot-assisted surgery, this talk will discuss the meaning of autonomy, a term which is used to indicate different functions. To achieve autonomous behavior, artificial intelligence algorithms must be embodied in a robotic host, and many challenges have to be solved to achieve the necessary cognitive and manual skills. Tasks required from an autonomous surgical robot include decision making, environment understanding, action planning and successful execution. This talk will summarize the results achieved by the robotics group at the University of Verona in developing new methods for knowledge representation, task planning, motion control learning, and situation awareness. The novel algorithms have been integrated into our laboratory set-up based on the da Vinci Research Kit (dVRK) and demonstrated in examples of surgical actions and training tasks.

Paolo Fiorini received the Laurea degree in Electronic Engineering from the University of Padova (Italy), the MSEE from the University of California at Irvine (USA), and the Ph.D. in ME from UCLA (USA). From 1985 to 2000, he was with the NASA Jet Propulsion Laboratory, California Institute of Technology, where he worked on autonomous and teleoperated systems for space experiments and exploration. In 2001 he returned to Italy, joining the School of Science of the University of Verona, where he is a Full Professor of Computer Science. His research focuses on teleoperation for surgery, space, service and exploration robotics, and autonomous navigation of robots and vehicles. In 2001 he founded the ALTAIR robotics laboratory, which has been awarded several EU and Italian grants, including projects on robotic surgery, such as Accurobas, Safros, Isur, and Eurosurge. In 2009, he founded the company Surgica Robotica for the development of a new surgical robot for abdominal surgery, which received CE certification in 2012. He is an IEEE Fellow (USA, 2009), Corresponding Member of the Academy of Agriculture, Sciences and Letters (Verona, 2015), and Honorary Professor of Obuda University (Budapest, 2016).



Prof. Masatoshi Yoshikawa
Professor
  Graduate School of Informatics, Kyoto University, Japan
Steering Committee Member
  International Conference on Big Data and Smart Computing
  IEEE Technical Committee on Data Engineering
Research Interests
  Privacy Protection and Data Valuation, Metadata Systems for Information on Earth Environment
Utilization of Privacy-Protected Personal Data

Our daily lives would not be possible without the services that we enjoy in exchange for providing personal data, such as search engines, Web advertisements, and restaurant searches. Personal data is a useful resource for our society. The construction of data ecosystems that maximize the utility of individuals and society as a whole, while keeping the privacy of personal data under individuals' control, is an important issue for the future of society. In this talk, we introduce technologies for utilizing privacy-protected personal data and their applications. The talk also covers basic concepts of a personal data market where individuals can sell their personal data and receive appropriate compensation for their potential privacy loss.
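One building block commonly used for this kind of privacy-protected utilization is differential privacy. The sketch below is a generic illustration (not necessarily a technique covered in the talk): a count query is released through the Laplace mechanism, so that any single individual's presence changes the answer distribution only slightly.

```python
import numpy as np

# Laplace mechanism for an epsilon-differentially-private count query.
# Generic illustration of privacy-protected data utilization; parameter values are assumed.

def private_count(records, predicate, epsilon=0.5, seed=None):
    """Release COUNT(records matching predicate) with Laplace noise of scale
    1/epsilon (a count has sensitivity 1), giving epsilon-differential privacy."""
    rng = np.random.default_rng(seed)
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(scale=1.0 / epsilon)

if __name__ == "__main__":
    ages = np.random.default_rng(0).integers(18, 90, size=10_000)
    noisy = private_count(ages, lambda a: a >= 65, epsilon=0.5)
    print(round(noisy))   # close to the true number of records with age >= 65
```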

Masatoshi Yoshikawa received B.E., M.E. and Ph.D. degrees from the Department of Information Science, Kyoto University in 1980, 1982 and 1985, respectively. He was on the faculty of Kyoto Sangyo University from 1985 until 1993. From 1989 to 1990, he was a visiting scientist at the Computer Science Department, University of Southern California. In 1993, he joined the Nara Institute of Science and Technology (NAIST) as a faculty member. From April 1996 to January 1997, he stayed at the Department of Computer Science, University of Waterloo as a visiting associate professor. From June 2002 to March 2006, he served as a professor at Nagoya University. Since April 2006, he has been a professor at Kyoto University.



Prof. Philippe Martinet
Research Director
   Inria Center at Côte d'Azur University, France
Research Interests
  Robotics; Visual Servoing; Vision; Parallel Robot; Humanoid.
Proactive Autonomous Navigation in Human Populated Environment

Autonomous navigation in human-populated environments is difficult because it faces the freezing-robot problem, where reactive techniques generally fail. Above a certain level of density, there is no solution unless the future evolution of the environment over a time horizon is taken into account. There are different aspects to consider in this global problem. The behavior of humans (or simply pedestrians, with or without electric mobility devices) must be observed or learned in order to predict its evolution. An accurate and realistic model of such agents is necessary; in that area, recent advances have been made by enhancing the classical Social Force Model. The second aspect concerns observation: humans represent a hidden dimension, and the question is what is necessary and what can be observed. The third aspect deals with control, in order to decide the action of the robot. The question is: what is the best action to take, given the knowledge we have and the observations we make, in order to reach a particular place in the human-populated environment? During the presentation, I will present the recent advances that we have made in different research projects.
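For reference, the classical Social Force Model that the abstract builds on can be written in a few lines. The sketch below is the textbook Helbing-Molnar form (goal attraction plus exponential repulsion between pedestrians) with typical literature parameters, not the enhanced model developed in the speaker's projects.

```python
import numpy as np

# Classical Social Force Model step (Helbing & Molnar): each pedestrian is driven
# toward a goal at a desired speed and repelled exponentially by nearby pedestrians.
# Textbook form with typical parameter values; the talk concerns enhanced variants.

V0, SIGMA = 2.1, 0.3       # repulsion strength (m^2/s^2) and range (m)
TAU, V_DES = 0.5, 1.3      # relaxation time (s) and desired walking speed (m/s)

def social_force_step(pos, vel, goals, dt=0.1):
    """Advance all pedestrians by one Euler step of the social force dynamics."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        # Driving force toward the goal at the desired speed.
        direction = goals[i] - pos[i]
        direction /= np.linalg.norm(direction) + 1e-9
        acc[i] = (V_DES * direction - vel[i]) / TAU
        # Repulsive forces from every other pedestrian.
        for j in range(n):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            dist = np.linalg.norm(diff) + 1e-9
            acc[i] += (V0 / SIGMA) * np.exp(-dist / SIGMA) * diff / dist
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

if __name__ == "__main__":
    pos = np.array([[0.0, 0.0], [4.0, 0.2]])     # two pedestrians walking toward each other
    vel = np.zeros((2, 2))
    goals = np.array([[5.0, 0.0], [-1.0, 0.2]])
    for _ in range(50):
        pos, vel = social_force_step(pos, vel, goals)
    print(np.round(pos, 2))                      # positions after 5 s of simulated interaction
```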

Philippe Martinet graduated from CUST, Clermont-Ferrand, France, in 1985 and received the Ph.D. degree in electronics science from Blaise Pascal University, France, in 1987. From 1990 to 2000, he was an assistant Professor with CUST. From 2000 until 2011, he was a Professor with the Institut Français de Mécanique Avancée (IFMA), Clermont-Ferrand. In 2006, he was a visiting professor at Sungkyunkwan University in Suwon, South Korea. In September 2011, he moved to Ecole Centrale de Nantes and LS2N. In November 2017, he moved to Inria Sophia Antipolis as Research Director. His research interests include visual servoing of robots, multi-sensor-based control, force-vision coupling, autonomous guided vehicles, and modeling, identification and control of complex machines. Since 1990, he has authored or coauthored around four hundred publications.



Prof. Minyi Guo
Head
   Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
Director
   Embedded and Pervasive Computing Center
Research Interests
  Parallel and Distributed Processing; Parallelizing Compilers; Cloud Computing; Pervasive Computing; Software Engineering, Embedded Systems; Green Computing; Wireless Sensor Networks.
Cloud Computing for Sprinting Peak Services

Many internet applications have the characteristic of a "sprinting peak load", that is, requests can increase a thousandfold from one time unit to the next. For example, WeChat red packets on New Year's Eve and Alibaba's "Double Eleven" shopping carnival on e-commerce platforms are such applications. Traditional cloud systems cannot satisfy the requirements of these internet services because they lack efficient, specialized mechanisms. In this talk, focusing on such applications, the principal shortcomings of traditional cloud systems are identified first. Then we describe improvements in request latency, storage throughput, container expansion speed, and fault tolerance that satisfy sprinting peak-load service requirements. The system we developed has been applied in many real sprinting peak-load scenarios.

Minyi Guo received the BSc and ME degrees in computer science from Nanjing University, China, and the PhD degree in computer science from the University of Tsukuba, Japan. He is currently a Chair Professor at Shanghai Jiao Tong University (SJTU), China. Before joining SJTU, Dr. Guo was a professor in the School of Computer Science and Engineering, University of Aizu, Japan. Dr. Guo received the National Science Fund for Distinguished Young Scholars from NSFC in 2007, and was supported by the "Recruitment Program of Global Experts" in 2010. His present research interests include parallel/distributed computing, compiler optimizations, big data and cloud computing. He has more than 400 publications in major journals and international conferences in these areas. He received 7 best/highlight paper awards from international conferences, including ASPLOS 2017 and ICCD 2018. He is now Editor-in-Chief of the IEEE Transactions on Sustainable Computing, and is on the editorial boards of the IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on Cloud Computing and the Journal of Parallel and Distributed Computing. Dr. Guo is a fellow of IEEE, a fellow of CCF, and a distinguished member of ACM.



Keynote Speakers - 2021


Dr. Edward W. Tunstel
Jr. Past President
  IEEE Systems, Man, and Cybernetics Society
Associate Director, Robotics
  Raytheon Technologies Research Center, USA
Research Interests
Human-Robot Collaboration, Mobile Robot Navigation, Soft Computing for Autonomous Systems and Control, Robotic Systems Engineering
Toward Head-Up, Hands-Off Interaction with Human-Collaborative Robots

Robotics is playing an increasing role in many domains, and advanced robots will eventually be applied in a variety of ways as partners to a range of users including factory workers, logistics personnel, and first responders to name a few. End-users and stakeholders are calling for developments that enable teaming of humans with robots. Robots operating in unstructured environments currently require tremendous individual oversight on the part of human operators/partners, calling for high cognitive workload and reduced alertness for one or more humans. Human operators are often constrained to head-down, hands-on human-robot interaction (HRI) that must be customized for each unique robot command and control system. More natural and intuitive human-robot interfaces are needed to deal with this problem. In many applications the most effective collaboration may leverage natural interaction, whether via spoken language or non-verbal communication in the forms of gestures and brain-machine interfaces. This talk discusses such technologies and interaction approaches requiring advancement and further attention toward enabling smart human-collaborative robots that are responsive to multiple modes of high-level, intuitive human interaction. Also discussed is the need to increase robotic intelligence and leverage cognitive architectures to reach the capacity to interpret and execute high-level commands and acquire skills via concepts of memetics. The talk offers a sense for existing technologies and additional research needs that will advance HRI toward head-up, hands off paradigms fostering more efficient human-collaborative operation.

Edward Tunstel is with the Autonomous & Intelligent Systems Department at Raytheon Technologies Research Center, USA, where he provides leadership, expertise, and associated strategy development and leads a research group that studies, develops, and transitions robotics technologies enabling autonomy and human-collaborative capabilities for manufacturing and service applications. He joined in 2017 after 10 years at Johns Hopkins Applied Physics Laboratory as a senior roboticist in its research department and Intelligent Systems Center, and as space robotics & autonomous control lead in its space department. At APL he was engaged in modular open systems development efforts supporting advanced EOD robotic systems programs as well as robotics and autonomy research for future national security and space applications. Prior to APL he was with NASA JPL for 18 years, where he was a senior robotics engineer and group leader of its Advanced Robotic Controls Group. He worked on the NASA Mars Exploration Rovers mission as both a flight systems engineer responsible for autonomous navigation and associated V&V, and as rover engineering team lead for rover mobility and robotic arm subsystems during surface mission operations on Mars. He earned B.S. and M.E. degrees in mechanical engineering from Howard University in Washington, DC and the Ph.D. in electrical engineering from the University of New Mexico. He is an IEEE Fellow with over 160 technical publications including five co-edited/authored books in his research interest areas.



Prof. Byoung-Tak Zhang
POSCO Chair Professor
   Seoul National University, Korea
Director
   Cognitive Robotics and Artificial Intelligence Center (CRAIC), Korea
Research Interests
Artificial Intelligence, Machine Learning, Cognitive Brain Science
Embodied Learning: Making AI Truly Autonomous and Self-improving

Deep learning has changed the paradigm of AI from rule-based "manual" programming to data-driven "automatic" programming. However, most deep learning methods require some external system that provides them with data, making their scalability limited. Here we argue that the learner can feed itself the data autonomously if it is embodied, i.e. equipped with sensors and actuators. With the perception-action cycle the embodied AI can continuously learn to solve problems in a self-supervised way by evaluating and correcting its own predictions like the human brain does. In this talk, I will show some of our studies in the BabyMind project that pursues recursively self-improving, autonomous cognitive systems, like human babies, via embodied learning, and discuss its implications on achieving truly human-level general AI.

Byoung-Tak Zhang is POSCO Chair Professor of Computer Science and Engineering and Director of the AI Institute, Seoul National University (SNU). He has served as President of the Korean Society for Artificial Intelligence (2010-2013) and of the Korean Society for Cognitive Science (2016-2017). He received his PhD (Dr. rer. nat.) in computer science from the University of Bonn, Germany in 1992 and his BS and MS in computer science and engineering from Seoul National University, Korea in 1986 and 1988, respectively. Before joining Seoul National University in 1997, he worked as a Research Fellow at the German National Research Center for Information Technology (GMD, now Fraunhofer Institutes) in Sankt Augustin/Bonn from 1992 to 1995. He was a Visiting Professor at MIT CSAIL and the Brain and Cognitive Sciences Department, Cambridge, MA, in 2003-2004, at the Samsung Advanced Institute of Technology (SAIT) in 2007-2008, at the BMBF Excellence Centers of Cognitive Technical Systems (CoTeSys, Munich) and Cognitive Interaction Technology (CITEC, Bielefeld) in the winter of 2010-2011, and at the Princeton Neuroscience Institute (PNI) in 2013-2014. He serves or has served as Associate Editor of the Journal of Cognitive Science, Applied Intelligence, BioSystems, and the IEEE Transactions on Evolutionary Computation (1997-2010). He has received numerous awards and honors, including the Red Stripes Order of Service Merit of the Korean Government, the INAK Award, the Minister of Science and Technology Award, the Okawa Research Grant Award, the Distinguished Service Award from the IEEE Computational Intelligence Society, and the Academic Excellence Award from the Korea Information Science Society.



Dr. Suntae Jung
Chief Strategy Officer
   Strategic Planning Team, Surromind, Korea
Research Interests
Technology convergence and business models of artificial intelligence, Digital health
Deep Learning For Broad And Practical Use In Existing Industries

Artificial intelligence is not only the future but also a practical technology that solves a variety of business problems in existing industries such as manufacturing, logistics, distribution, finance, healthcare, etc. It is becoming an essential capability for improving the quality, efficiency and speed of existing businesses. In particular, AI takes over routine human work, speeds up work, and helps humans make decisions quickly and accurately. In the recent COVID-19 era, digital transformation is accelerating with the expansion and enhancement of AI. Developing artificial intelligence requires AI experts and infrastructure. In order to apply AI more widely and practically, development platforms that are easier to use are needed. New development platforms are emerging that are based on GUIs (graphical user interfaces) and integrate all functions from data processing to AI modeling, distribution, and performance monitoring. The main part of the development process is done automatically. Even people who are not familiar with AI are interested in it, and if they have data, they can develop AI and use it for their own work. Explainable AI that can explain optimal modeling and analysis results is also being developed as a new trend in AI development platforms. In this talk, I will introduce trends in development platforms that can be used more widely in existing businesses.

Suntae Jung is a research and business development professional. He received his Ph.D. in materials science and engineering from Seoul National University in 1995, following B.S. and M.S. degrees from the same university in 1988 and 1990. Between 1995 and 1996 he was a researcher at the Beckman Institute at the University of Illinois at Urbana-Champaign. Since joining Samsung Electronics in 1996, he has mainly focused on the development of innovative technologies such as optical devices, user interfaces, intelligent applications, and digital health. He led the development of commercial technologies and products for mobile and smart device applications. He also led the development of sensing, artificial intelligence and recognition technologies as core technologies for mobile interaction and service applications. He is creative and active, holding 92 patents and authoring more than 40 papers. He was honored with the national IR52 (Industrial Research) award in 2009 and 2012. He joined the AI company Surromind as Chief Strategy Officer in 2020. He is currently working on technology convergence and business models of artificial intelligence, a key technology for digital transformation.



Prof. Tony Q.S. Quek
Cheng Tsang Man Chair Professor
   Singapore University of Technology and Design (SUTD), Singapore
Deputy Director
   SUTD-ZJU IDEA, Singapore
ISTD Pillar Head, Sector Lead
   SUTD AI Program, Singapore
Research Interests
Wireless communications and networks, network intelligence, big data processing, URLLC, IoT, wireless security
AI: A Networking and Communication Perspective

Recent breakthroughs in artificial intelligence and machine learning, including deep neural networks, together with the availability of powerful computing platforms and big data, are providing us with technologies to perform tasks that once seemed impossible. In the near future, autonomous vehicles and drones, intelligent mobile networks, and the intelligent internet-of-things (IoT) will become the norm. At the heart of this technological revolution, it is clear that we will need artificial intelligence over a massively scalable, ultra-high capacity, ultra-low latency, and dynamic new network infrastructure. In this talk, we will provide an overview of applying AI in networking and communications settings and present some interesting applications. In addition, we will share some of our preliminary work in this area.

Tony Q.S. Quek received the B.E. and M.E. degrees in Electrical and Electronics Engineering from Tokyo Institute of Technology, Tokyo, Japan, and the Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT), Cambridge, MA. Currently, he is the Cheng Tsang Man Chair Professor at the Singapore University of Technology and Design (SUTD). He also serves as Head of the ISTD Pillar, Sector Lead for the SUTD AI Program, and Deputy Director of SUTD-ZJU IDEA. His current research topics include wireless communications and networking, big data processing, network intelligence, URLLC, and IoT. Dr. Quek has been actively involved in organizing and chairing sessions and has served as a TPC member in numerous international conferences. He is currently serving as an Editor for the IEEE Transactions on Wireless Communications, Chair of the IEEE VTS Technical Committee on Deep Learning for Wireless Communications, and an elected member of the IEEE Signal Processing Society SPCOM Technical Committee. He was an Executive Editorial Committee Member of the IEEE Transactions on Wireless Communications, an Editor of the IEEE Transactions on Communications, and an Editor of the IEEE Wireless Communications Letters. He is a co-author of several books published by Cambridge University Press. Dr. Quek received the 2008 Philip Yeo Prize for Outstanding Achievement in Research, the 2012 IEEE William R. Bennett Prize, the 2016 IEEE Signal Processing Society Young Author Best Paper Award, the 2017 CTTC Early Achievement Award, the 2017 IEEE ComSoc AP Outstanding Paper Award, the 2020 IEEE Communications Society Young Author Best Paper Award, the 2020 IEEE Stephen O. Rice Prize, a 2020 Nokia Visiting Professorship, and recognition as a 2016-2020 Clarivate Analytics Highly Cited Researcher. He is a Distinguished Lecturer of the IEEE Communications Society and a Fellow of the IEEE.



Prof. Haruo Yokota
Dean
   School of Computing, Tokyo Institute of Technology, Japan
Professor
   School of Computing, Tokyo Institute of Technology, Japan
Research Interests
Data Engineering, Database Technologies, Dependable Storage Systems, Secure Data Access, Intelligent Contents Retrieval
Information Technologies for the Secondary Use of Electronic Medical Records

Our daily lives have been greatly affected by information technology, which has become one of our most important infrastructures. For example, information technology has introduced significant changes in the medical field, such as medical image recognition, medical sensor data processing, computational drug design, and electronic medical record (EMR) systems. EMR systems not only reduce the cost of managing medical treatment histories but can also improve medical processes through the secondary use of these records. To expedite such secondary use, the Japanese government has started a project to collect EMRs from a large number of hospitals in Japan. The clinical pathway service is a good example of the secondary use of EMRs. Medical workers, including doctors, nurses, and technicians, generally use clinical pathways as guidelines for typical sequences of medical treatments. Traditionally, clinical pathways have been created by the medical workers themselves, based on their experience and with great effort. Candidate clinical pathways can instead be extracted by applying sequential pattern mining techniques to the medical orders in the EMR; comparing the extracted frequent sequential patterns with existing clinical pathways helps medical workers verify their correctness or modify them. To provide proper patterns as useful information to medical workers, a number of technical issues must be considered. First, the time intervals between medical treatments must be taken into account. Moreover, the frequent sequential patterns extracted from EMRs contain many branches, and visualizing these branches is important for choosing appropriate patterns. The issues of cost, safety, and reasoning related to these branches should also be considered.
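
As a rough illustration of the kind of mining described above (not Prof. Yokota's actual system), the short Python sketch below counts how often one medical order follows another across a few made-up patient records and reports the median time gap; all order codes, data, and thresholds are hypothetical.

from collections import Counter, defaultdict
from statistics import median

# Each patient record: list of (day_offset, order_code), sorted by time. Toy data.
records = [
    [(0, "admission"), (0, "blood_test"), (1, "antibiotic"), (3, "blood_test"), (5, "discharge")],
    [(0, "admission"), (1, "blood_test"), (1, "antibiotic"), (4, "discharge")],
    [(0, "admission"), (0, "blood_test"), (2, "antibiotic"), (6, "discharge")],
]

pair_support = Counter()       # number of patients exhibiting each ordered pair
pair_gaps = defaultdict(list)  # observed time gaps (days) for each pair

for seq in records:
    patient_pairs = set()
    for i, (t1, a) in enumerate(seq):
        for t2, b in seq[i + 1:]:
            patient_pairs.add((a, b))
            pair_gaps[(a, b)].append(t2 - t1)
    pair_support.update(patient_pairs)   # count each pair once per patient

min_support = 2  # hypothetical threshold: pattern must occur in at least 2 patients
for (a, b), support in pair_support.most_common():
    if support >= min_support:
        print(f"{a} -> {b}: support={support}, median gap={median(pair_gaps[(a, b)])} days")

A real pathway miner would work on longer order sequences and branch structures, but the same support counts and interval statistics are the raw material for the visualization issues mentioned above.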

Haruo Yokota is Dean of the School of Computing at Tokyo Institute of Technology, Japan. He has been a Full Professor at the Department of Computer Science of Tokyo Institute of Technology since 2001. He was an Associate Professor at the Japan Advanced Institute of Science and Technology (JAIST), a researcher at Fujitsu Laboratories Ltd. from 1986 to 1992, and a researcher at ICOT for the 5th Generation Computer Project from 1982 to 1986. He received his B.E., M.E., and Dr.Eng. degrees from Tokyo Institute of Technology in 1980, 1982, and 1991, respectively. His research interests include the general areas of data engineering, database technologies, dependable storage systems, secure data access, and intelligent contents retrieval. He has been a vice president of DBSJ, chair of the ACM SIGMOD Japan Chapter, a trustee board member of IPSJ, Editor-in-Chief of the Journal of Information Processing, and an associate editor of the VLDB Journal. He is currently a board member of DBSJ, a fellow of IEICE and IPSJ, a senior member of IEEE, and a member of IFIP WG 10.4, JSAI, ACM, and ACM SIGMOD.

2020


Prof. Kostas J. Kyriakopoulos
Director
  Postgraduate Program on Automation Systems, National Technical University of Athens, Greece
Professor
  National Technical University of Athens, Greece
Research Interests
Control Systems & Robotics: Applications to Autonomous Systems (ground, underwater, aerial)
Cooperation in Autonomous Systems: Intelligent Control Design to compensate for Lean Communication

Our presentation centers on the development of provable cooperation schemes for sensor-based motion planning and interaction control of autonomous systems. Our goal is to design sound interfaces between our provable techniques and higher-level, machine-intelligence-based decision-making schemes, with the purpose of minimizing explicit communication. We first address a decentralized motion planning and control architecture for a cooperative loading task using heterogeneous robots operating in a constrained workspace with static obstacles. The optimal loading configuration is selected by considering the connectivity of the space and the distance between the robots. A motion control scheme for each agent is designed and implemented to autonomously guide each robot to the desired loading configuration with guaranteed obstacle avoidance and convergence properties. The performance of the proposed strategy is experimentally verified in a variety of loading scenarios. We continue with a novel distributed leader-follower architecture for cooperative manipulation by multiple Underwater Vehicle Manipulator Systems (UVMS) under lean communication. The leading UVMS, which knows the desired trajectory, achieves tracking behavior via an impedance control law, leading the overall formation towards the goal configuration while avoiding collisions with obstacles. The following UVMSs estimate the object's desired trajectory via a novel prescribed-performance estimation law and implement a similar impedance control law. No explicit data is exchanged online among the robots. Various simulations and experiments illustrate the proposed method and verify its efficiency.
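
For readers unfamiliar with impedance control, the minimal Python sketch below simulates a one-dimensional impedance law, M*e'' + D*e' + K*e = F_ext, tracking a ramp trajectory under a brief contact force. The gains, trajectory, and force profile are invented for illustration only and are unrelated to the actual UVMS controllers.

# 1-D impedance control sketch: the tracking error behaves like a mass-damper-spring
# driven by the external force, so contact produces compliant, bounded deviations.
M, D, K = 2.0, 8.0, 20.0       # hypothetical desired inertia, damping, stiffness
dt, T = 0.01, 5.0              # integration step and horizon in seconds
x, x_dot = 0.0, 0.0            # actual position and velocity

for k in range(int(T / dt)):
    t = k * dt
    x_des, xd_des = 0.5 * t, 0.5             # desired trajectory: constant-velocity ramp
    F_ext = 5.0 if 2.0 < t < 2.5 else 0.0    # brief contact-force disturbance

    e, e_dot = x - x_des, x_dot - xd_des
    e_ddot = (F_ext - D * e_dot - K * e) / M # impedance dynamics on the error
    x_dot += e_ddot * dt                     # desired acceleration is zero for the ramp
    x += x_dot * dt
    if k % 100 == 0:
        print(f"t={t:4.1f}s  x={x:7.3f}  x_des={x_des:7.3f}  F_ext={F_ext:4.1f}")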



Prof. Shou-De Lin
Professor
  National Taiwan University, Taiwan
Research Interests
Machine Learning, Big Data Analytics, Knowledge Discovery and Data Mining, Natural Language Processing, Internet of Things Data Analysis, Social Network Data Analysis
Toward Better Understanding of Encoder-Decoder based Deep Neural Network Models

Encoder-decoder models based on recurrent neural networks (Seq2Seq models for short), including variants built on Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) cells, have demonstrated considerable success in many areas. However, it is still a mystery why a recurrent network with such a simple structure can achieve such complex tasks. In this talk, I will first demonstrate the power of a Seq2Seq model in learning hidden connections between inputs and outputs, then offer an in-depth analysis at the neuron level to explain how and why it works. Finally, I will describe a general strategy for performing in-depth analysis of deep recurrent neural networks.
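
As a minimal, self-contained reference for the architecture discussed above (not the speaker's models), the PyTorch sketch below wires a GRU encoder to a GRU decoder through the encoder's final hidden state; the vocabulary size, dimensions, and random data are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID = 100, 32, 64   # hypothetical vocabulary and layer sizes

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, tgt):
        _, h = self.encoder(self.emb(src))           # h summarizes the whole input sequence
        dec_out, _ = self.decoder(self.emb(tgt), h)  # teacher forcing: condition on gold target
        return self.out(dec_out)                     # logits over the output vocabulary

model = Seq2Seq()
src = torch.randint(0, VOCAB, (8, 10))   # batch of 8 input sequences of length 10
tgt = torch.randint(0, VOCAB, (8, 12))   # corresponding target sequences of length 12
logits = model(src, tgt)
loss = F.cross_entropy(logits.reshape(-1, VOCAB), tgt.reshape(-1))
loss.backward()
print(logits.shape, float(loss))

The single hidden vector passed from encoder to decoder is exactly the "hidden connection" whose neuron-level behavior the talk proposes to analyze.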



Prof. Kwan Min Lee
Korea Foundation Professor
   Contemporary Korean Society and New Media, Nanyang Technological University (NTU), Singapore
Former Vice President
   Samsung Electronics, Korea
Research Interests
UX (User Experience) research and design; Social and psychological effects of ICT (Information and Communication Technologies); Human-machine interaction.
Human Evolution and UX (User Experience) Innovation

In this talk, I will explain how human evolution has shaped the way we respond to and interact with media and information technologies. I will then explain how our understanding of media psychology can be effectively applied to the design of successful ICT (information and communication technologies) innovations. I propose that, to design innovative ICT products and services, we need to understand and develop four things: (1) evolutionary responses to technology form factors; (2) sensor technologies to understand users; (3) technologies manifesting a machine's wills and desires; and (4) social interfaces. Examples come from both my academic and industry work, spanning evolutionary psychology to current directions in smart media design by Samsung and other ICT companies.



Prof. Seungjin Choi
President
  KIISE AI Society, Korea
Chief Technology Officer
  BARO, Korea
Former Professor
  Pohang University of Science and Technology, Korea
Research Interests
Bayesian Optimization, Deep Meta Learning, Deep Reinforcement Learning, Random Projection, Matrix Factorization
Learning to Learn for Few-shot Problems and Warm-starting

Deep learning has achieved great success in various tasks when trained with large amounts of labeled data. However, training deep models with only a handful of labeled examples is challenging. Few-shot learning, whose goal is to learn from only a few examples per class, has become a popular subject in the machine learning and computer vision communities. Learning to learn, also known as meta-learning, has re-emerged as a promising approach to few-shot problems. In this talk, I introduce recent advances in meta-learning, the underlying idea of which is to leverage past experience to learn a prior over tasks, so that a model can quickly adapt to a novel task. I begin with a few metric-based deep learning methods that have been developed to solve few-shot learning tasks. I give an information-theoretic unified view of existing metric learning methods and present a method (DIMCO) for learning a discrete latent representation shared across relevant tasks, enabling a model to adapt to new tasks quickly. Experiments show that DIMCO requires less memory (i.e., code length) to achieve performance similar to previous methods and that it is particularly effective when the training dataset is small. I also discuss gradient-based meta-learning, such as model-agnostic meta-learning (MAML), emphasizing my recent work on MT-nets, which learn a layer-wise subspace and metric from a set of tasks. Finally, I introduce a method for warm-starting Bayesian optimization, a critical technique for AutoML, experimental design, and neural architecture search.
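
As a small illustration of the metric-based idea (in the spirit of prototypical networks, not DIMCO or MT-nets themselves), the NumPy sketch below builds class prototypes from a few support examples and classifies queries by nearest prototype; the random embeddings stand in for the output of a learned encoder.

import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, n_query, dim = 5, 3, 4, 16   # a 5-way 3-shot episode, hypothetical sizes

# Pretend these embeddings came from an encoder f_theta applied to images.
support = rng.normal(size=(n_way, k_shot, dim))
query = rng.normal(size=(n_way, n_query, dim))

prototypes = support.mean(axis=1)            # one prototype per class: mean of its k shots

# Classify each query by the nearest prototype in squared Euclidean distance.
q = query.reshape(-1, dim)
d = ((q[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
pred = d.argmin(axis=1)
true = np.repeat(np.arange(n_way), n_query)
print("episode accuracy:", (pred == true).mean())

With random embeddings the accuracy is near chance; meta-learning trains the encoder over many such episodes so that prototypes of unseen classes become separable.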



Prof. Minyi Guo
Head
   Department of Computer Science and Engineering
Director
   Embedded and Pervasive Computing Center
Professor
   Shanghai Jiao Tong University, China
Research Interests
Parallel and Distributed Processing; Parallelizing Compilers; Cloud Computing; Pervasive Computing; Software Engineering, Embedded Systems; Green Computing; Wireless Sensor Networks.
Towards a New Architecture for Urban Big Data Processing Systems

Nowadays, sensing technologies and large-scale computing infrastructures produce a variety of big data in urban spaces, e.g., human mobility, air quality, traffic patterns, and geographical data. This big data implies rich knowledge about a city and, when used correctly, can help tackle urban challenges; that is, holistic urban big data plays a key role in smart city construction. However, processing urban big data needs a specific computing engine, different from traditional ones such as Hadoop and Spark, because its sensing and knowledge representation are more complicated than those of domain-specific big data. In this talk, we will discuss the properties required for processing urban big data and introduce a new platform for processing and analyzing it. We will then discuss how collaborative computing bridges the data and computation in cyberspace with the environment, systems, people, and things in the physical world.

2019

Prof. Frank Biocca
Chair
  Department of Informatics, Ying Wu College of Computing
  New Jersey Institute of Technology, USA
Professor
  New Jersey Institute of Technology, USA



Towards Mobile Experience of Virtual and Augmented Reality

Virtual and augmented reality systems are evolving so that phone-based and increasingly immersive experiences are enabled in mobile settings. In this talk, we examine hardware and interface design issues in mobile augmented reality and how they affect the user experience of larger virtual environments.



Prof. Yasushi Kiyoki
Former President
  Database Society of Japan
Professor
  Keio University, Japan



A SPA-based Semantic Computing System for Natural and Social Environment-Analysis and Visualization with "Multi-Dimensional World-Map"

Humankind, the dominant species on Earth, faces a most essential and indispensable mission: we must endeavor on a global scale to perpetually restore and improve our natural and social environments. One of the essential computations in environmental study is context-dependent semantic computing, which analyzes changes in various situations in a context-dependent way using a large amount of environmental information resources.
It is also important to memorize those situations and compute environmental change in various aspects and contexts, in order to discover what is happening in the nature of our planet. We proposed a multi-dimensional computing model, the Mathematical Model of Meaning (MMM), in 1994. As a global environmental system based on MMM, we have realized the "5-Dimensional World Map System" for integrating and analyzing environmental phenomena in ocean and on land. We introduce the concept of "SPA (Sensing, Processing and Analytical Actuation) functions" for realizing a global environmental system and apply it to the 5-Dimensional World Map System. This concept is effective and advantageous for designing environmental systems with physical-cyber integration: environmental phenomena are detected as real data resources in physical space (real space), mapped to cyberspace for analytical and semantic computing, and the analytically computed results are actuated back to the real space with visualizations that express environmental phenomena, causalities, and influences. The 5D World Map System is utilized globally as a Global Environmental Semantic Computing System in SDG14, United Nations ESCAP (https://sdghelpdesk.unescap.org/toolboxes).


Prof. Chidchanok Lursinsap
Professor
  Chulalongkorn University, Thailand



Concept of Discard-after-learn and Multi-stratum Neural Networks with Application

One of the challenging problems in training a neural network is how to train it so that the number of epochs is approximately linear in the number of training patterns while the testing accuracy remains within an acceptable range. In this talk, we will discuss the concept of discard-after-learn for training a multi-stratum neural network to achieve this bound under the constraints of data overflow, streaming, and time-space complexity preservation, and other issues such as network plasticity. An application to a problem in cyber security will also be demonstrated.


Prof. Minh-Triet Tran
Vice Rector
   University of Science, VNU-HCM
Professor
   University of Science, VNU-HCM



Analysing Daily Activity Logs for Smart Ubiquitous Environment

Collecting and analyzing daily activity logs can provide insights for better understanding and possible optimization of individual and organizational activities and operations. Lifelog data can come in various media formats: audio recorded during conversations, photos or video clips captured by wearable or regular personal cameras, or even biometric data such as heart rate or calories burned. Visual lifelog data is one of the most valuable sources for personal diary generation, as it carries rich potential information and is easy to collect. There is an increasing demand to process and analyze daily activity logs, mostly in visual form, to develop useful services and utilities for smart environments. In this talk, we introduce several modalities for analyzing and interacting with lifelog data to develop potential applications for smart environments. The proposed systems are based on practical social needs and aim to provide people with a natural experience of smart services and utilities in a ubiquitous environment.

  • People can access augmented data and services by recognizing the current context and retrieving similar known cases.
  • Lost items can be found or memories can be retrieved or verified by searching daily logs.
  • Reminiscence can help people to positively revive past memories and connections with their relatives.
  • Regular events and anomalies can be detected from surveillance systems for appropriate actions.
  • Image captioning can automatically generate a textual daily diary from visual data.
We also discuss privacy and security issues in collecting and analyzing daily activity logs.

2018


Prof. Angel P. del Pobil
Director
  Robotic Intelligence Laboratory, Jaume I University
Professor
  Engineering and Computer Science Deparment
Jaume I University, Spain



Speech Title: Robots as Cyberphysical Systems: The Challenges Ahead

An intelligent robot is a perfect paradigm of a cyber-physical system (CPS), since its very nature is based on the seamless integration of computational algorithms and physical components, including embedded sensors, processors, and actuators, in order to sense and interact with the physical world. In my speech I will address some of the challenges for robots considered as CPS, such as adaptability, autonomy, functionality, resiliency, and safety, with emphasis on physical interaction with the environment. As test cases I will consider robots as personal assistants, along with robots in online shopping warehouses, as an example of the move towards the fourth industrial revolution, the so-called Industry 4.0, with some lessons learned from our recent participation in the Amazon Robotics Challenge 2017 that took place in Nagoya in July 2017. I will also discuss some implications in terms of the interactions of information processing, communication, and control of physical processes, with special emphasis on the difficulties that dealing with open-ended physical entities can bring.



Prof. James Won-Ki Hong
Dean
  Graduate School for Information Technology, POSTECH, Korea
Professor
  Dept. of Computer Science and Engineering
POSTECH, Korea


Speech Title: Towards Carrier Network Virtualization and Applications

Network virtualization is the technology that enables the creation and management of logical networks on top of a shared underlying physical network. It has the potential to significantly reduce both the capital expenditures (CAPEX) and operating expenses (OPEX) of networks by supporting multi-tenancy, flexibility, programmability, scalability, and agility. At the same time, Software Defined Networking (SDN) and Network Function Virtualization (NFV) have achieved remarkable advances by changing the networking paradigm over the last decade. The combination of SDN, NFV, and network virtualization technologies can bring various potential benefits and opportunities to carrier-grade networks, not only data center networks. However, most network virtualization solutions today are deployed in data center networks for cloud platforms, to provide connectivity between virtual machines; achieving network virtualization for carrier-grade networks is still a long way off in terms of reliability, manageability, resiliency, performance, and security. Moreover, network virtualization of carrier-grade networks can be used to realize service slicing, one of the 5G network visions.



Prof. Feng Xia
Assistant Dean
  School of Software
Dalian University of Technology, China
Head
  Department of Cyber Engineering
Dalian University of Technology, China
Professor
  Dalian University of Technology, China


Speech Title: Data Science in Science

As data (especially big data) become the new oil, data science has attracted intensive and growing attention from industry, government, and academia. Data science focuses on deriving valuable knowledge from (raw) data in an efficient and intelligent manner, with the purpose of prediction, exploration, understanding, and/or intervention. It encompasses the set of methods and tools that enable data-driven activities in business, government, and scientific research. In particular, data science is playing an increasingly important role in scientific research itself. One piece of evidence is the so-called fourth paradigm of science, which features data-intensive scientific discovery; another is the emergence of scholarly big data. Recent years have witnessed exponential growth of scholarly data in all scientific disciplines, and this rapid rise brings new issues and challenges with respect to, for example, information retrieval, data management, and analysis. Data science in science, which exploits scholarly big data, enables us to better understand the nature of science and offers great potential for addressing these challenges. This talk will look into recent advances in this field and discuss relevant opportunities and challenges.



Prof. Shahrul Azman Mohd Noah
Professor
  Universiti Kebangsaan Malaysia, Malaysia


Speech Title: Ontology and What It Has to Do with Information Retrieval

The discipline of philosophy defines ontology as the science of 'what is': the kinds and structures of objects, properties, events, processes, and relations in every area of reality. From the perspective of computing, an ontology is an engineering artefact constituted by a specific vocabulary used to describe a certain reality, plus a set of explicit assumptions regarding the intended meaning of that vocabulary. Thus, an ontology is a formal specification of a certain domain: a formally shared understanding of a domain of interest that is machine-manipulable. Ontologies have been applied in many knowledge-based applications such as decision support systems, expert systems, and question-answering systems. Information retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers). In contrast to ontology-based systems, IR systems rely on the bag-of-words representation to retrieve documents; as such, the contextual and semantic meaning of terms as they appear in the documents is lost. Various efforts have been made to improve the retrieval performance of existing IR systems, such as feature-based models, term dependence models, and entity-based models. In this talk, we describe the use of ontologies to enhance the retrieval performance of IR systems. We will first review various applications of ontology to support or enhance semantic IR. We will then show some of the research we have undertaken or are currently pursuing in ontology-based information retrieval, namely semantic digital libraries, crime news retrieval, and multimodal ontology retrieval. We conclude the talk with challenges and future research directions in the area.
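
To make the contrast concrete, the toy Python sketch below scores documents first with plain bag-of-words term overlap and then with a query expanded through a tiny hand-made "ontology"; the documents, vocabulary, and concept mappings are all hypothetical.

from collections import Counter

docs = {
    "d1": "police arrest suspect in downtown robbery",
    "d2": "officers detain a man after a bank heist",
    "d3": "new library catalogue search goes online",
}

# A tiny mock ontology mapping query concepts to related terms.
ontology = {
    "arrest": {"detain", "apprehend"},
    "robbery": {"heist", "theft"},
    "police": {"officers"},
}

def score(query_terms, text):
    words = Counter(text.split())
    return sum(words[t] for t in query_terms)   # simple term-overlap score

query = ["police", "arrest", "robbery"]
expanded = set(query)
for t in query:
    expanded |= ontology.get(t, set())           # add ontology-related terms

for name, text in docs.items():
    print(name, "bag-of-words:", score(query, text), "ontology-expanded:", score(expanded, text))

Document d2 is relevant but scores zero under pure bag-of-words matching; the ontology-expanded query recovers it, which is the basic intuition behind ontology-enhanced IR.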

2017


Prof. Hiroyuki Kitagawa
Former President
  Database Society of Japan
Professor
  Center for Computational Sciences
University of Tsukuba, Japan



Real World Big Data Integration and Analysis: Research Issues and Challenges

Big data technologies have been having a huge impact on every aspect of human activity and human society and are transforming the world. They serve as a major driving force leading us to the next generation of society and industry. In recent years, many research and development projects have been launched to advance big data technologies and apply them to real societies around the globe. In Japan, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started "Research and Development on Real World Big Data Integration and Analysis" two years ago, and we, a research team formed by researchers from four major Japanese universities, are collaborating and actively working towards its goals. This talk will give an overview of the project, including its objectives and goals, main research activities, and the research outcomes obtained in the past two years. Big data is often characterized by several V's, such as Volume, Variety, Velocity, Veracity, and Value. This talk will especially highlight our research efforts to address the Variety and Velocity issues, since utilization and integration of real-time social streaming data is one of the key research issues in the project. I will elaborate on topics such as our event-oriented stream processing system, stream OLAP analytics, a unified big data processing framework integrating streaming and batch processing, and metadata inference techniques for big streaming data integration.



Prof. Jin Woo Kim
Professor
  School of Business
Yonsei University, Korea


Digital Companions: A combination of HCI and AI for Life-Companionship

In the past, we were surrounded by many human companions who shared activities with us, engaged in social relations with us, and knew us well. Companions include, but are not limited to, husbands and wives, sisters and brothers, and good old friends. However, such human companions have become less available recently for numerous cultural and economic reasons. The lack of life companions leads to serious social problems, such as depression and suicide. To compensate for the lack of human companions, digital companions such as Pepper and Jibo have been suggested. In this talk, I present a conceptual model of the digital companion that includes the prerequisites and core components of ideal companions. The conceptual model is composed of seven core technologies, which combine HCI and AI. This talk will explain the seven technologies and present video clips from science-fiction movies that clearly exhibit future directions for digital life companions.



Prof. Dong-Hee Shin
Distinguished Scholar
  Ministry of Education (National Research Foundation), Korea
Professor
  School of Media and Communication
Chung-Ang University, Seoul, Korea


A User-based Model of Quality of Experience for the Internet of Things

The exponential growth of services via the Internet of Things (IoT) is making it increasingly important to cater to the quality expectations of end users. Quality of experience (QoE) can become the guiding paradigm for managing quality provisioning and application design in the IoT. This study examines the relationship between consumer experience and quality perception of IoT and develops a conceptual model for QoE in personal informatics. Using an ethnographic observation, it first characterizes quality of service (QoS) and subjective evaluation to compare QoS with QoE. It then performs a user survey to identify user behavior factors in personal informatics. It finally proposes a user experience model, conceptualizing QoE specific to personal informatics and highlighting its relationships with other factors. The model establishes a foundation for IoT service categories through a heuristic quality assessment tool from a user-centered perspective. The results overall provide the groundwork for developing future IoT services with QoE requirements, as well as for dimensioning the underlying network provisioning infrastructures, particularly with regard to wearable technologies.

2016
Prof. Jim Jansen
Professor
  College of Information Sciences and Technology
Director
  Information Searching and Learning Laboratory
  The Pennsylvania State University, USA
Principal Scientist
  Qatar Computing Research Institute, Qatar
   
Prof. Hongbin Zha
Professor
  Department of Machine Intelligence
Director
  Key Lab of Machine Perception (MOE)
  Peking University, China

The Transformed Role of the Viewer: Second Screens and the Social Soundtrack

The nearly ubiquitous use of mobile devices with easy access to social media platforms enables a unique form of social interaction around broadcast media and other events, shifting the viewer from a passive to an active role, engaged in information sharing, consumption, and dissemination, often in real time. This technological affordance for online conversation about an event is referred to as the second screen phenomenon, although more than two screens may be involved. The resulting online conversation from second-screen interaction about an event is referred to as the social soundtrack, an interesting conversational form of information sharing, information interaction, and information diffusion. This keynote will introduce the theoretical constructs and empirical measures of social soundtrack and second screen research, along with the application of these constructs and measures in current investigations involving millions of posts on multiple social media platforms. Research concerning the social soundtrack and second screens is important in identifying the influence and affordances that technology has on social media conversations from an information sharing perspective. Research findings can also shed light on social communication in relation to the cultural impact of broadcast media events, social interaction in cross-technology usage for second screens, and the effect of second screen technologies on pop culture and human information processing.

   

3D Reconstruction for Object Modeling and Scene Analysis

3D reconstruction is an important field in computer vision, and results accumulated in the field have found wide application in virtual reality, creative media design, and robotics. Nevertheless, we still face great challenges when we try to use these techniques to model objects with complex structures or large-scale scenes. The major difficulties come from several constraints in traditional approaches, including the ambiguity and uncertainty inherent in the reconstruction algorithms, limitations on viewpoint movement, occlusion of objects, and the low resolution of available 3D data. In the talk, I will introduce some newly developed methods that aim to solve these problems by making good use of imaging geometry principles and the fusion of data from different sensors. Main topics include reconstruction from silhouettes using a camera system with two planar mirrors, depth image super-resolution based on similarity-aware patchwork assembly, and urban scene description by analysis of 3D data collected from car-mounted sensors. I will also report results from an application of such 3D digitization techniques in heritage documentation, mainly for grotto objects and scenes.



2015
Dr. Jihie Kim
Vice President
  Software R&D Center
  Samsung Electronics, Korea
   
Prof. Mary Beth Rosson
Professor and Interim Dean
  College of Information Sciences and Technology
  Pennsylvania State University, USA

Intelligence in Education

Social software, such as online forums, wikis, and social networking sites, plays an important role in various fields, including science, politics, and education. Our goal is to analyze social activities within online communication and collaboration environments, and to develop computational tools that support and promote effective interactions and participation. This talk presents our work on online discussion modeling and intelligent tools for assisting discussion participants. We first analyze how messages and individual discussants contribute to Q&A discussions. We present a model for capturing the information-seeking or information-providing roles of messages, such as question, answer, or acknowledgement. We also identify user intent in the discussion as an information seeker or a provider. We show how the role information can be combined with linguistic and temporal features to develop a predictive model of discussant performance. We also demonstrate how such role information can be used to promote interactions among potential peer collaborators.
In the latter part of the presentation, we show how such analyses can be a powerful tool for dialogue mediators and participants. In particular, we present a computational workflow (big data) framework that enables efficient and robust integration and analyses of diverse datasets. The analysis results are used for assisting discussion mediators or facilitating just-in-time adaptation to discussants' needs, such as identifying unresolved issues or help seekers who need more assistance.
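
As a toy illustration of combining linguistic and temporal features in a predictive model (not the actual workflow framework described above), the Python sketch below trains a logistic regression to separate question-like from answer-like messages; the features, labels, and data are made up.

from sklearn.linear_model import LogisticRegression

# Features per message: [has_question_mark, word_count, minutes_since_thread_start]
X = [
    [1, 12, 0], [1, 8, 2], [0, 45, 15], [0, 60, 30],
    [1, 10, 1], [0, 38, 20], [1, 15, 3], [0, 52, 40],
]
y = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = question (information seeking), 0 = answer

clf = LogisticRegression().fit(X, y)
new_msg = [[1, 9, 5]]          # short message with a question mark, early in the thread
print("P(question) =", clf.predict_proba(new_msg)[0][1])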

   

The iSchool Vision for Interdisciplinary University Research and Education

The emergence of iSchools has been much discussed with respect to an interdisciplinary vision for both research and the education of undergraduate and graduate students. The Pennsylvania State University was one of the first iSchools, launched in 1998 to meet the needs of workforce development for students who have information technology skills, and also to research topics in real-world interdisciplinary computing. In this talk, I will give a brief history of how and why this new realm of academic pursuits has emerged, illustrated throughout with examples drawn from education and research activities at Penn State and other iSchools. Reflecting on the past 15 years, I will also point to a set of continuing challenges and opportunities for interdisciplinary study founded on the integration of the information sciences, an increasingly ubiquitous technological substrate, and the broad and ambiguous implications for human individuals and organizations situated in real-world activities.



2014
Prof. Ben Lee
Professor
  School of Electrical Engineering and Computer Science
  Oregon State University, USA
   
Prof. Hamid R. Arabnia
Professor
  Department of Computer Science
  University of Georgia, USA

Wireless HD Video Transmission Technology: Challenges and Future Applications

Wireless High Definition Video Transmission (WHDVT) over 802.11-based networks is an important enabling technology for home networks, viewing video on the move, and N-screen environments. However, significant challenges exist in delivering smooth playback of HD content as WHDVT becomes more pervasive and multiple streams need to be supported on the same network. These challenges include the lossy and delay-prone nature of wireless media, the unequal importance of video packets, and user mobility. This talk first introduces the basic concepts of WHDVT, including the characteristics of 802.11 networks, H.264 video compression, and video streaming protocols. Then, several solutions at various layers will be presented, including the application, RTP/UDP and RTP/TCP, MAC, and physical layers. Finally, the talk will conclude with open research issues and future directions.
   

Bio-Inspired Supercomputing and Big Data

In order to convert data to knowledge, it is necessary to search (and process) data sets that are on the order of zettabytes in size (Big Data). Conventional computers (uniprocessor systems) are unable to process Big Data in a timely manner. Inherent limitations on the computational power of sequential uniprocessor systems have led to the development of parallel multiprocessor systems. The two major issues in the formulation and design of parallel multiprocessor systems are algorithm design and architecture design. Parallel multiprocessor systems should be designed so as to facilitate the design and implementation of efficient parallel algorithms that optimally exploit the capabilities of the system. From an architectural point of view, the system should have low hardware complexity, be capable of being built from easily replicated components, exhibit desirable cost-performance characteristics, and scale well in hardware complexity and cost with increasing problem size. In distributed memory multiprocessor systems, the processing elements can be considered to be nodes that are connected together via an interconnection network... The design presented in this talk is bio-inspired.

2013
Prof. Hitoshi Aida
Professor
  Department of Electrical Engineering and Information Systems, School of Engineering
  The University of Tokyo, Japan
Chairman
  Committee for Information, Computer and Communications Policy,
  Organisation for Economic Co-operation and Development (OECD)
   
Prof. Tei-Wei Kuo
Distinguished Professor
  Department of Computer Science and Information Engineering
  National Taiwan University, Taiwan
Executive Director
  Intelligent and Ubiquitous Computing Thematic Center,
  Research Center for IT Innovation, Academia Sinica, Taiwan
Board Director
  Genesys Logic, Taiwan
Chairman
  Embedded Systems Group, National Networked Communication Program Office, Taiwan

Renewable Energy Powered, Disaster-Resilient Wireless Network Infrastructure

Because of the rapid increase in smartphones, mobile phone operators are desperately trying to offload mobile phone traffic to femto cells, WiFi hotspots, or WiMAX coverage. In Japan, on the other hand, because many base stations stopped operating due to long commercial power failures or broken fiber trunks after the Great East Japan Earthquake, people began thinking seriously about the resilience of network infrastructure. Attaching large batteries or powering mobile phone base stations with renewable energy, however, is not usually practical because of the size and weight of the equipment. In this talk, we investigate the feasibility of a WiFi-based wireless network infrastructure powered by renewable energy, which is connected by a fiber trunk, offloads mobile phone traffic in ordinary times, and acts as a wireless-relayed mesh network after a disaster.
   

The Positioning of Non-Volatile Memory in Embedded System Designs

In recent years, non-volatile memory has shown great potential for serving as a layer in the memory hierarchy, such as flash memory for the secondary storage of mobile devices. Its inherent characteristics also point to new directions in system design and grand challenges. In this talk, we will first give a brief introduction to non-volatile memory, especially flash memory and phase change memory. We will then present challenges and solutions for flash memory as a storage medium. The talk concludes with key challenges for system designs based on phase change memory.
2012
Prof. Sajal K. Das
University Distinguished Scholar Professor
  Department of Computer Science and Engineering
Director
  Center for Research in Wireless Mobility and Networking (CReWMaN)
  The University of Texas at Arlington, USA
   
Prof. Abdullah Mohd Zin
Professor
  Faculty of Information Science and Technology
  Universiti Kebangsaan Malaysia

Cyber-Physical and Networked Sensor Systems: Challenges and Opportunities

Rapid advancements in embedded systems, sensors, and wireless communication technologies have led to the development of cyber-physical systems, pervasive computing, and smart environments, with important applications such as smart grids, sustainability, health care, and security. Wireless sensor networks play a significant role in building such systems, as they can effectively act as the human-physical interface with the digital world through sensing, communication, computing, and control or actuation. However, the inherent characteristics of wireless sensor networks, typified by resource constraints, a high degree of uncertainty, heterogeneity, and distributed control, pose significant challenges in ubiquitous information management. After introducing the basic challenges, opportunities, and applications, this talk will present a novel framework for multi-modal context recognition from streaming sensor data, context-aware data fusion, and situation-aware decision making, with a trade-off between information accuracy (inference quality) and energy consumption. The underlying approach is based on dynamic Bayesian and probabilistic models, machine learning, information-theoretic reasoning, and game theory. The talk will conclude with open research issues and future directions.
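
As a minimal illustration of Bayesian context inference from a noisy sensor stream (illustrative only, not the proposed framework), the Python sketch below updates a belief over three hypothetical contexts using readings from a coarse motion sensor.

import numpy as np

states = ["home", "walking", "office"]
belief = np.array([1/3, 1/3, 1/3])          # uniform prior over contexts

# Hypothetical sensor model: P(reading | state) for a coarse motion sensor.
likelihood = {
    "still":  np.array([0.8, 0.1, 0.7]),
    "moving": np.array([0.2, 0.9, 0.3]),
}

for reading in ["still", "moving", "moving", "still", "still"]:
    belief = belief * likelihood[reading]   # Bayes update with each observation
    belief /= belief.sum()                  # renormalize to a probability distribution
    print(reading, dict(zip(states, belief.round(2))))

A full framework would add state-transition models, multiple sensor modalities, and an accuracy-versus-energy policy for deciding which sensors to sample, but the core update step looks like this.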
   

Beyond Ubiquitous Computing: The HoneyBee Ensemble Computing Environment

Since the 1980s, the computing environment has moved from a centralized environment to a distributed one, and the distributed computing environment itself has moved from one phase to another. In the 1980s, it took the form of client-server computing, followed by Internet computing in the 1990s. The wide availability of mobile devices together with wireless networks then changed the computing environment into mobile computing and later into pervasive or ubiquitous computing. In 2008, the European Union Interlink WG1 task group proposed that the next wave of computing should be ensemble computing, in order to answer four major research challenges in the current computing environment: (i) massive numbers of nodes in a system, (ii) open environments, (iii) non-deterministic environments, and (iv) adaptation. In an ensemble computing environment, computing devices can communicate and work together to complete a certain task based on peer-to-peer protocols and supporting services. The advantages of this environment can be summarized as follows: ad hoc interaction, fluidity, transience, and scalability. There are two models of ensemble computing: a swarm of bats or a beehive. In this paper we describe our proposed model of an ensemble environment, known as the HoneyBee environment. The discussion is divided into four main issues. The first concerns ensemble computing in general, followed by a discussion of the formal model of the HoneyBee environment. Some possible applications (two issues) within the HoneyBee environment are described next. The fourth issue concerns Agent Oriented Programming, which is considered to be the most suitable software development approach for this type of computing environment.
2011
Prof. S. Shyam Sundar
Distinguished Professor of Communications
Co-Director
  Media Effects Research Lab
  The Pennsylvania State University, USA
   
Prof. Ding-Zhu Du
Professor
  Department of Computer Science
  University of Texas at Dallas, USA

Living Interactively and Socializing Ubiquitously

This keynote talk will address the psychology of living in a ubiquitous computing environment, by focusing on how new technological affordances enable individuals to express agency and build community in an ongoing manner. The recent proliferation of location-based information tools and the popularity of communication technologies that encourage social interaction have contributed to a computationally intensive environment, with users constantly managing information for themselves as well as sharing information with others at unprecedented levels. We constantly straddle real and virtual worlds without making the distinction between the real and the virtual. We have come to expect high-fidelity, context-aware systems that serve to blur the boundary between the two. As a result, rules of interaction management are undergoing dramatic changes, with consequences for design of future systems and interfaces.
   

Next Generation Network, Wireless Network and Topology Control with Small Routing Cost

One of the important components of the potential next-generation network is the wireless network, and topology control is a vital factor in wireless network efficiency. Since a wireless network has no physical infrastructure, it may suffer from a severe problem known as the broadcast storm problem, caused by the flooding inherent in on-demand routing schemes. Inspired by the physical backbone in classical wired networks, the virtual backbone has been proposed and studied extensively in the literature for wireless networks, to reduce the damage caused by flooding and to maximize resource utilization. However, when we employ a virtual backbone, two problems may be introduced. The first is an increase in routing cost. The second is that the load on some links may increase, which may cause traffic jams. How do we solve these problems? In this talk, we will introduce recent research on their solutions.
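
As a concrete illustration of a virtual backbone, the short Python sketch below uses a classic textbook construction: the internal (non-leaf) nodes of any spanning tree of a connected graph form a connected dominating set. The toy topology and the networkx-based construction are for illustration only and are not the specific algorithms of the talk.

import networkx as nx

G = nx.random_geometric_graph(30, 0.35, seed=1)   # toy wireless topology
if not nx.is_connected(G):
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

tree = nx.bfs_tree(G, source=next(iter(G.nodes)))     # any spanning tree works
backbone = {v for v in tree if tree.degree(v) >= 2}   # keep internal nodes, drop leaves

# Sanity checks: the backbone dominates every node and is connected within G.
dominated = all(v in backbone or any(u in backbone for u in G[v]) for v in G)
print("backbone size:", len(backbone), "of", G.number_of_nodes(),
      "dominating:", dominated,
      "connected:", nx.is_connected(G.subgraph(backbone)))

Routing over such a backbone avoids network-wide flooding, at the price of the detour (routing cost) and link-load concentration issues mentioned in the abstract.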


 
  Phone: +82-31-299-4620
Email: imcom.skku@gmail.com