Visualization Meets AI 2025


Goals

The task of data visualization generally involves a design step, which requires knowledge of both the data domain and visualization methods to do well. Because the design space is immense, deriving desired visualization results from data for exploration or communication can take substantial effort from both novices and experts. Following the resurgence of artificial intelligence (AI) technology in recent years, there is growing interest and opportunity in the visualization field in applying AI to perform data transformation and to assist in generating visualizations, aiming to strike a balance between cost and quality. Using visualization to enhance AI is another active line of research. This workshop, held in conjunction with IEEE PacificVis 2025, aims to explore this emerging area of research and practice by fostering communication between visualization researchers and practitioners. Attendees will be introduced to the latest and greatest research innovations in AI-enhanced visualization (AI4VIS) as well as visualization-enhanced AI (VIS4AI), and will also learn about further research opportunities. The workshop will be composed of full-paper presentations, short-paper presentations, and invited talks.

Call for Participation

Submission

We welcome contributions as full papers and short papers. All accepted papers will appear in the IEEE PacificVis 2025 proceedings and the IEEE Xplore Digital Library.

Papers should follow the formatting guidelines for VGTC Conference Style Papers. There is no strict page limit, but authors are encouraged to submit a paper whose length matches its contribution. Our recommended paper lengths are as follows:

- Full paper: up to 10 pages (including references)

- Short paper: up to 6 pages (including references)

Both full papers and short papers are to be submitted via PCS (Track Name: PacificVis 2025 Visualization Meets AI Workshop). We accept both single-blind (not anonymized) and double-blind (anonymized) submissions. For double-blind submissions, please replace the author names with the paper ID number.

Important Dates (for both full papers and short papers)

- Paper due: December 27, 2024

- 1st cycle notification: January 31, 2025

- Revision due: February 14, 2025

- 2nd cycle notification: February 24, 2025

- Camera ready paper due: March 3, 2025

- Workshop: April 22, 2025

All deadlines are at 11:59 pm (23:59) Anywhere on Earth (AoE).


Topics of Interest

We encourage submissions of high-quality research and application papers incorporating visualization and AI/machine learning. Our interests include both AI4VIS and VIS4AI topics.

Example papers:

AI4VIS
P.-P. Vázquez. Are LLMs ready for Visualization? In Proc. PacificVis, pp. 343-352, 2024.
J. Han and C. Wang. VCNet: A Generative Model for Volume Completion. Visual Informatics, 6(2): 62-73, 2022.
L. Giovannangeli, R. Bourqui, R. Giot, and D. Auber. Toward Automatic Comparison of Visualization Techniques: Application to Graph Visualization. Visual Informatics, 4(2): 86-98, 2020.
J. Shen, R. Wang, and H.-W. Shen. Visual Exploration of Latent Space for Traditional Chinese Music. Visual Informatics, 4(2): 99-108, 2020.

VIS4AI
Z. Liang, G. Li, R. Gu, Y. Wang, and G. Shan. SampleViz: Concept based Sampling for Policy Refinement in Deep Reinforcement Learning. In Proc. PacificVis, pp. 359-368, 2024.
M. Gleicher, X. Yu, and Y. Chen. Trinary Tools for Continuously Valued Binary Classifiers. Visual Informatics, 6(2): 74-86, 2022.
X. Ji, Y. Tu, W. He, J. Wang, H.-W. Shen, and P.-Y. Yen. USEVis: Visual Analytics of Attention-Based Neural Embedding in Information Retrieval. Visual Informatics, 5(2): 1-12, 2021.
M. Wang, J. Wenskovitch, L. House, N. Polys, and C. North. Bridging Cognitive Gaps between User and Model in Interactive Dimension Reduction. Visual Informatics, 5(2): 13-25, 2021.


People

Workshop Chairs

Takanori Fujiwara, Linköping University

Junpeng Wang, Visa Research


Program Committee

Coming soon.

Contact

pvis_ai4vis@pvis.org