VMCAI 2023 - POPL 2023

Welcome to the website of the 24th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2023).

VMCAI provides a forum for researchers from the communities of Verification, Model Checking, and Abstract Interpretation, facilitating interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas. VMCAI 2023 will be the 24th edition in the series.

VMCAI 2023 will take place on January 16-17, 2023.

Proceedings and Recordings

The full conference proceedings are available from Springer.

Recordings of both the contributed talks and the invited talks are available on YouTube.

Highlights

- Amazon Web Services (Platinum)
- Certora (Bronze)

The program is displayed in (GMT-05:00) Eastern Time (US & Canada).

11:00 - 12:30: Neural Networks and Abstract Interpretation (VMCAI at Arlington). Chair(s): Grigory Fedyukovich (Florida State University)

- 11:00 (15m, talk, recorded): Maximal Robust Neural Network Specifications via Oracle-guided Numerical Optimization. Anan Kabaha (Technion - Israel Institute of Technology), Dana Drachsler Cohen (Technion)
- 11:15 (15m, talk, recorded): SMT-Based Modeling and Verification of Spiking Neural Networks: A Case Study. Sumana Ghosh, Soham Banerjee, Swarup Mohalik, Ansuman Banerjee
- 11:30 (30m, talk): A generic framework to coarse-grain stochastic reaction networks by Abstract Interpretation. Jerome Feret (INRIA Paris), Albin Salazar (INRIA/CNRS/ENS/PSL)
- 12:00 (30m, talk): Sound Symbolic Execution via Abstract Interpretation and its Application to Security. Xavier Rival (Inria; ENS; CNRS; PSL University), Ignacio Tiraboschi (Inria, France / ENS, France), Tamara Rezk (INRIA)

Accepted Papers

- A generic framework to coarse-grain stochastic reaction networks by Abstract Interpretation. Jerome Feret, Albin Salazar
- A Pragmatic Approach to Stateful Partial Order Reduction (recorded). Berk Cirisci, Constantin Enea, Azadeh Farzan, Suha Orhun Mutluergil
- ARENA: Enhancing Abstract Refinement for Neural Network Verification. Yuyi Zhong, Quang Trung Ta, Siau-Cheng Khoo
- Bayesian parameter estimation with guarantees via interval analysis and simulation. Luisa Collodi
- Compositional Verification of Stigmergic Collective Systems. Luca Di Stefano, Frederic Lang
- CosySEL: Improving SAT Solving Using Local Symmetries. Sabrine Saouli, Souheib Baarir, Claude Dutheillet, Jo Devriendt
- Distributing and Parallelizing Non-canonical Loops. Clément Aubert, Thomas Rubiano, Neea Rusch, Thomas Seiller
- Efficient Interprocedural Data-Flow Analysis using Treedepth and Treewidth. Amir Kafshdar Goharshady, Ahmed Khaled Zaher
- Maximal Robust Neural Network Specifications via Oracle-guided Numerical Optimization (recorded). Anan Kabaha, Dana Drachsler Cohen
- Result Invalidation for Incremental Modular Analyses. Jens Van der Plas, Quentin Stiévenart, Coen De Roover
- Satisfiability Modulo Custom Theories in Z3 (Tool Paper). Nikolaj Bjørner, Clemens Eisenhofer, Laura Kovács
- SMT-Based Modeling and Verification of Spiking Neural Networks: A Case Study (recorded). Sumana Ghosh, Soham Banerjee, Swarup Mohalik, Ansuman Banerjee
- Solving Constrained Horn Clauses over Algebraic Data Types. Lucas Zavalia, Lidiia Chernigovskaia, Grigory Fedyukovich
- Sound Symbolic Execution via Abstract Interpretation and its Application to Security. Xavier Rival, Ignacio Tiraboschi, Tamara Rezk
- StaticPersist: Compiler Support for PMEM Programming (recorded). Sorav Bansal
- Symbolic Abstract Heaps for Polymorphic Information-flow Guard Inference. Nicolas Berthier, Narges Khakpour
- Synthesizing History and Prophecy Variables for Symbolic Model Checking. Cole Vick, Kenneth L. McMillan

Call for Papers

VMCAI 2023 is the 24th International Conference on Verification, Model Checking, and Abstract Interpretation. The conference will be held during January 15-17, 2023. VMCAI provides a forum for researchers from the communities of Verification, Model Checking, and Abstract Interpretation, facilitating interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas.

Scope

The program will consist of refereed research papers as well as invited lectures and tutorials. Research contributions can report new results as well as experimental evaluations and comparisons of existing techniques.

Topics include, but are not limited to:

Submissions can address any programming paradigm, including concurrent, constraint, functional, imperative, logic, and object-oriented programming.

Important Dates AoE (UTC-12)

September 15, 2022 (extended from September 8, 2022): Paper submission

October 20, 2022 (extended from October 13, 2022): Notification

November 10th, 2022: Camera-ready version due

Submissions

Submissions are required to follow Springer’s LNCS format. The page limit depends on the paper’s category (see below). In each category, additional material beyond the page limit may be placed in a clearly marked appendix, to be read at the discretion of the reviewers and to be omitted in the final version. Style files and further formatting guidelines can be found on the Springer website. Submission is via EasyChair.

Submissions will undergo a single-blind review process. Accepted papers will be published in Springer’s Lecture Notes in Computer Science series. There will be three categories of papers: regular papers, tool papers and case studies. Papers in each category have a different page limit and will be evaluated differently.

Regular papers clearly identify and justify an advance to the field of verification, abstract interpretation, or model checking. Where applicable, they are supported by experimental validation. Regular papers are restricted to 20 pages in LNCS format, not counting references.

Tool papers present a new tool, a new tool component, or novel extensions to an existing tool. They should provide a short description of the theoretical foundations with relevant citations, and emphasize the design and implementation concerns, including software architecture and core data structures. A regular tool paper should give a clear account of the tool’s functionality, discuss the tool’s practical capabilities with reference to the type and size of problems it can handle, describe experience with realistic case studies, and where applicable, provide a rigorous experimental evaluation. Papers that present extensions to existing tools should clearly focus on the improvements or extensions with respect to previously published versions of the tool, preferably substantiated by data on enhancements in terms of resources and capabilities. Authors are strongly encouraged to make their tools publicly available and submit an artifact. Tool papers are restricted to 12 pages in LNCS format, not counting references.

Case studies are expected to describe the use of verification, model checking, and abstract interpretation techniques in new application domains or industrial settings. Papers in this category do not necessarily need to present original research results but are expected to contain novel applications of formal methods techniques as well as an evaluation of these techniques in the chosen application domain. Such papers are encouraged to discuss the unique challenges of transferring research ideas to a real-world setting and reflect on any lessons learned from this technology transfer experience. Case study papers are restricted to 20 pages in LNCS format, not counting references. (Shorter case study papers are also welcome.)

Call for Artifacts

VMCAI 2023 offers the option to submit an artifact along with a paper. Artifacts are any additional material that substantiates the claims made in the paper and, ideally, makes them fully replicable. For some papers, these artifacts are as important as the paper itself because they provide crucial evidence for the quality of the results. The goal of artifact evaluation is twofold. On the one hand, we want to encourage authors to provide more substantial evidence for their papers and to reward authors who create artifacts. On the other hand, we want to simplify the independent replication of results presented in the paper and to ease future comparison with existing approaches. Artifacts of interest include (but are not limited to):

Artifact submission is optional. However, we highly encourage all authors to also submit an artifact. A successfully evaluated artifact can increase your chance of acceptance, since the evaluation result of your artifact is taken into account during paper reviewing. Additionally, badges shown on the title page of the corresponding paper give you credit for good artifact submissions. For artifacts that are successfully evaluated by the artifact evaluation committee, we grant the available badge. Artifacts that are publicly available under a DOI receive the availability badge. Authors may use all granted badges on the title page of the respective paper.

Important Dates

The artifact evaluation will be done in parallel with the evaluation of the submitted paper. The artifact submission deadline is one week after the paper submission deadline.

September 9, 2022: Artifact submission opens

September 22, 2022 (extended from September 15, 2022): Artifact submission

October 16, 2022 (rescheduled from October 3, 2022): Artifact test phase notification

October 16–20, 2022 (rescheduled from October 4–7, 2022): Artifact clarification period

October 30, 2022: Artifact notification

All artifacts are evaluated by the artifact evaluation committee. Each artifact will be reviewed by at least two committee members. Reviewers will read the paper and explore the artifact to evaluate how well the artifact supports the claims and results of the paper. The evaluation is based on the following questions.

The artifact evaluation is performed in the following two phases.

Artifacts Submission

An artifact submission should consist of

The artifact evaluation chairs will download the archive file and distribute it to the reviewers. Please also see the Artifact Packaging Guidelines section for detailed information about the contents of the submission. The abstract (as a txt or pdf file) should be submitted via EasyChair. The abstract should include a description of the artifact, the URL of the download link (upload your VM to a hosting service of your choice; EasyChair will not accept VM uploads), as well as the SHA256 checksum of your archive file.

https://easychair.org/my/conference?conf=vmcai2023

We need the checksum to ensure the integrity of your artifact. You can generate the checksum using standard command-line tools.
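For example, a minimal sketch of computing the checksum on common platforms, assuming a hypothetical archive named `artifact.zip` (substitute your real archive file):

```shell
# Demo only: create an empty stand-in archive (use your real archive instead).
: > artifact.zip

# Linux (GNU coreutils): prints "<64-hex-digit digest>  artifact.zip"
sha256sum artifact.zip

# macOS ships shasum rather than sha256sum:
# shasum -a 256 artifact.zip

# Windows (PowerShell):
# Get-FileHash artifact.zip -Algorithm SHA256
```

Copy the 64-character hexadecimal digest into your EasyChair abstract; the reviewers can then recompute it after downloading to verify that the archive was not corrupted in transit.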

If you cannot submit the artifact as requested or encounter any other difficulties in the submission process, please contact the artifact evaluation chairs prior to submission.

Artifact Packaging Guidelines

There are two acceptable ways to submit your artifact. The first option is to package the artifact as an archive file and write your instructions such that the artifact evaluation committee can evaluate the artifact within a virtual machine provided by us. In this case, submit only the files required to replicate your results in the provided virtual machine; do not include a virtual machine image in the archive file. AEC members will copy your archive file into the provided virtual machine.

The second option is to modify the given VM and reupload the VM to a hosting platform of your choice. We will check the hash of the image, then load the VM to test your artifact. In this case, a README should be contained in the home directory of the VM.

We recommend preparing your artifact in such a way that any computer science expert without dedicated expertise in your field can use your artifact, especially to replicate your results. For example, provide easy-to-use scripts and a detailed README document.

VMCAI 2023 Virtual Machine

An initial version of the virtual machine is available here. The user name is vmcai and the password is vmcai-2023. If you have any questions regarding the VM, or if you think the VM is unsuitable for evaluating your artifact, please contact the artifact evaluation chairs.

Submission Contents

Your virtual machine must contain the following elements.

  1. The main artifact, i.e., data, software, libraries, scripts, etc. required to replicate the results of your paper. The review is single-blind: please make sure that you do not (accidentally) learn the identity of the reviewers (e.g., through analytics or logging).
  2. A license file. Your license needs to allow the artifact evaluation chairs to download and distribute the artifact to the artifact evaluation committee members and the artifact evaluation committee members must be allowed to evaluate the artifact, e.g., use, execute, and modify the artifact for the purpose of artifact evaluation.
  3. A README text file that introduces the artifact to the user and guides the user through the replication of your results. Ideally, it should describe the structure and content of your artifact and the steps needed to set it up within the VM. To simplify the reviewing process, we recommend providing an installation script (if necessary). Please support the reviewers not only during the main review phase but also during the testing phase: provide instructions that allow installation and rudimentary testing (so that technical difficulties surface early) in as little time as possible. Document in detail how to replicate the results of your paper:

Please document which claims or results of the paper can be replicated with the artifact and how (e.g., which experiment must be performed). Please also explain which claims and results cannot be replicated and why.

Publication of Artifacts

The artifact evaluation committee uses the submitted artifact only for the artifact evaluation. It will not publicize the artifact or any parts of it during or after the evaluation. Artifacts and all associated data will be deleted at the end of the evaluation process. We encourage the authors of artifacts to also make their artifacts permanently available, e.g., on Zenodo or figshare, and to refer to them in their papers via a DOI. All artifacts for which a DOI exists that is known to the artifact evaluation committee are granted the availability badge.