DSN 2023: Workshops Call for Contributions


The workshops at DSN 2023 will provide a forum for groups of researchers to discuss topics in dependability-related research and practice. DSN workshops cover a diverse range of topics. Their purpose is to serve as incubators for scientific communities that form and share a particular research agenda. They also provide opportunities for researchers to exchange and discuss scientific ideas at an early stage, before those ideas have matured enough to warrant conference or journal publication.

* All dates refer to AoE time (Anywhere on Earth)

Workshop on Data-Centric Dependability and Security (DCDS)

Website: http://dcds.lasige.di.fc.ul.pt

Description

The workshop aims to provide researchers with a forum to exchange and discuss scientific contributions and open challenges, both theoretical and practical, related to the use of data-centric approaches that promote the dependability and cybersecurity of computing systems. We want to foster joint work and knowledge exchange between the dependability and security communities and researchers and practitioners from areas such as machine and statistical learning, data science, and visualization. The workshop provides a venue for discussing novel trends in data-centric processing technologies and the role of such technologies in the development of resilient systems. It aims to discuss novel approaches for processing and analyzing data generated by systems, as well as information gathered from open sources, leveraging data science, machine and statistical learning techniques, and visualization. The workshop shall also contribute to identifying new application areas as well as open and future research problems for data-centric approaches to system dependability and security.

Important dates

Workshop on Safety and Security in Intelligent Vehicles (SSIV)

Website: https://sites.google.com/view/ssiv

Description

This will be the eighth edition of the SSIV workshop, aiming to continue the success of previous editions. The vast range of open challenges in achieving Safety and Security in Intelligent Vehicles (whether or not they are connected to the Internet) is a fundamental reason behind the numerous research initiatives and the wide discussion of these matters that we are currently observing everywhere.

The successful pairing of humans and machines, represented by robotics solutions that augment humans, has the potential to make our workforce safer and more productive and to provide non-conventional modes of transportation. The workshop will therefore keep its focus on exploring the challenges and interdependencies between security, real-time operation, safety, and certification, which emerge when introducing networked, autonomous, and cooperative functionalities.

Important dates

Workshop on Dependable and Secure Machine Learning (DSML)

Website: https://dependablesecureml.github.io/index.html

Description

Machine learning (ML) is increasingly used in critical domains such as health and wellness, criminal sentencing recommendations, commerce, transportation, human capital management, entertainment, and communication. The design of ML systems has mainly focused on developing models, algorithms, and the datasets on which they are trained, so as to demonstrate high accuracy for specific tasks such as object recognition and classification. ML algorithms typically construct a model by training on a labeled training dataset, and their performance is assessed by the accuracy with which they predict labels for unseen (but often similar) test data. This rests on the assumption that the training dataset is representative of the inputs the system will face in deployment.

However, in practice there is a wide variety of unexpected accidental, as well as adversarially crafted, perturbations of the ML inputs that can violate this assumption. ML algorithms are also often over-confident in their predictions when processing such unexpected inputs, which makes them difficult to deploy in safety-critical settings, where one needs to be able to rely on ML predictions to make decisions or revert to a fail-safe mode. Further, ML algorithms are often executed on special-purpose hardware accelerators, which may themselves be subject to faults. There is thus growing concern about the reliability, safety, security, and accountability of machine learning systems.
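
As a purely illustrative sketch (not part of the workshop material), the Python snippet below trains a simple classifier on clean data and then evaluates it on noise-perturbed inputs, mirroring the distribution-shift problem described above: accuracy drops while the reported confidence often stays comparatively high. The dataset, noise scale, and model are arbitrary choices assumed for illustration; scikit-learn and NumPy are required.

    # Illustrative sketch only: a classifier trained on clean data is evaluated on
    # perturbed inputs that violate the "representative training data" assumption.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print("accuracy on clean test data:", model.score(X_test, y_test))

    # Accidental perturbation: additive noise the training set never contained.
    rng = np.random.default_rng(0)
    X_noisy = X_test + rng.normal(scale=8.0, size=X_test.shape)

    print("accuracy on perturbed data:", model.score(X_noisy, y_test))
    # The model may still report confident predictions on these out-of-distribution inputs.
    print("mean top-class confidence:", model.predict_proba(X_noisy).max(axis=1).mean())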

The DSN Workshop on Dependable and Secure Machine Learning (DSML) is an open forum for researchers, practitioners, and regulatory experts to present and discuss innovative ideas and practical techniques and tools for producing dependable and secure ML systems. A major goal of the workshop is to draw the attention of the research community to the problem of establishing guarantees of reliability, security, safety, and robustness for systems that incorporate increasingly complex ML models, and to the challenge of determining whether such systems can comply with requirements for safety-critical systems. A further goal is to build a research community at the intersection of machine learning and dependable and secure computing.

Important dates

Workshop on Approximate Computing (AxC)

Website: https://www.iti.uni-stuttgart.de/en/chairs/ca/axc23/

Description

Approximate Computing (AxC) has emerged as a design paradigm for building modern systems that trade off efficiency, in terms of performance, power consumption, hardware area, and execution time, against the quality/exactness of the outcomes. AxC is based on the intuitive observation that, while performing exact computation requires a large amount of computational resources, allowing a selective approximation or an occasional relaxation of the specification can provide significant gains in energy efficiency while still producing acceptable results. Moreover, although the hidden cost of AxC is a reduction in an application's inherent resiliency to errors, AxC has also recently been demonstrated to be effective in safety-critical applications.
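
As a small, purely illustrative Python sketch (not drawn from the workshop itself), the snippet below uses loop perforation, a well-known AxC technique, to compute an approximate mean by skipping a fraction of the input: it does roughly a quarter of the work of the exact computation at the cost of a small, measurable error. The skip factor and input data are arbitrary assumptions made for illustration.

    # Illustrative sketch of loop perforation: process only a subset of the data,
    # trading result accuracy for reduced work (a proxy for time/energy savings).
    def mean_exact(values):
        return sum(values) / len(values)

    def mean_perforated(values, skip=4):
        # Use only every `skip`-th element, i.e. roughly 1/skip of the work.
        sampled = values[::skip]
        return sum(sampled) / len(sampled)

    data = [float(i % 97) for i in range(1_000_000)]
    exact = mean_exact(data)
    approx = mean_perforated(data, skip=4)
    print("exact mean:", exact)
    print("approximate mean:", approx)
    print("relative error:", abs(exact - approx) / exact)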

The DSN Workshop on Approximate Computing explores the AxC continuum, enabling new methodologies that exploit approximation in many recent application domains, such as machine learning, safety, and security, in an effective, dependable, and secure manner. Methodologies and automated tools covering the entire design and manufacturing flow are still lacking, even though professionals are already deploying AxC solutions in real systems. Moreover, existing and new techniques are moving across all system layers, no longer focusing only on the hardware layer but also including cross-layer approaches, which broadens the potential community of interested researchers.

The workshop invites researchers and professionals to present and discuss novel ideas and techniques for approximation across all layers of the system stack. This edition places extra focus on cross-layer approximate computing and approximation techniques for open architectures, with particular emphasis on the EU microprocessor plan.

Important dates

Workshop on the Verification & Validation of Dependable Cyber-Physical Systems (VERDI)

Website: https://verdi-workshop.github.io/2023/

Description

The rapid increase in available communication bandwidth and computational power, as well as emerging computing paradigms such as Cloud Computing, Edge Computing, and Deep Learning, are pushing Cyber-Physical Systems (CPS) research and development forward and establishing CPS as promising engineering solutions to challenges arising in areas as diverse as aerospace, automotive, energy, disaster response, health care, smart farming, manufacturing, and city management.

A key property that CPS are expected to exhibit is dependability. An essential ingredient for ensuring dependability is the successful application of verification & validation (V&V) techniques to attest to the desired levels of safety, security, and privacy. This is a challenging task that carries significant time and cost implications for all the organizations involved in building and evaluating CPS. The challenge becomes even more critical as more and more Artificial Intelligence models are incorporated into the operational capabilities of CPS to handle increasingly complex tasks.

The VERDI workshop aims to serve as a discussion forum focused on V&V as a means to guarantee the dependability of complex, potentially automated or autonomous CPS.

Important dates