mmai2022

Workshop on the representation, sharing and evaluation of multimodal agent interaction.

Welcome to the 1st Workshop on the representation, sharing and evaluation of multimodal agent interaction.

This workshop is part of the first International Conference on Hybrid Human-Artificial Intelligence, to be held at the Vrije Universiteit Amsterdam.

Human Agent Interaction

Program

TO BE ANNOUNCED

Motivation

Interaction is a real-world event that takes place in time and in physical or virtual space. By definition, it exists only while it happens. This makes it difficult to observe and study interactions, to share interaction data, to replicate or reproduce them, and to evaluate agent behavior in an objective way. Interactions are also extremely complex, involving many variables whose values change from case to case: the physical circumstances differ, the participants differ, and past experiences shape the actual event. In addition, the eye(s) of the camera(s) and the presence of experimenters are themselves factors with impact, and the manpower needed to capture such data is high. Finally, privacy issues make it difficult to simply record and publish interaction data freely.

It is therefore no surprise that interaction research progresses slowly. This workshop aims to bring together researchers with different research backgrounds to explore how interaction research can become more standardised and scalable. The goal of this workshop is to explore how researchers and developers can share experiments and data in which multimodal agent interaction plays a role, and how these interactions can be compared and evaluated. Especially within real-world physical contexts, modeling and representing situations and contexts for effective interactions is a challenge. We therefore invite researchers and developers to share with us how and why they record multimodal interactions, whether their data can be shared or combined with other data, how systems can be trained and tested, and how interactions can be replicated. Machine learning communities such as vision and NLP have made rapid progress by creating competitive leaderboards based on benchmark datasets. While this works well for training unimodal perception models, such datasets are not sufficient for research involving interaction, where multiple modalities must be considered.

So what do we as interaction researchers need in order to achieve similar progress? What kinds of shared platforms and tools? What kinds of datasets are most useful? What about something along the lines of the “Alexa challenge” for dialogue systems, or “RoboCup@Home” for HRI, where groups of researchers start with the same platform and compete based on how well they can design interactions on specific tasks with real users? What would such a challenge look like? What kinds of tasks? Where do the users come from? How might simulation come into play, and how far can it get us?

In addition to this focus on the representation and evaluation of multimodal interaction, we will also run a panel discussion on privacy issues related to interaction data and the possibilities to mitigate privacy limitations for sharing.

Call for papers

We invite submissions of long and short papers focusing on advances in multimodal data collection for conversational AI. Papers can cover experimental and theoretical research, as well as tools, platforms, and practical engineering challenges. We invite researchers and developers to share with us how and why they record multimodal interactions, whether their data can be shared or combined with other data, how systems can be trained and evaluated, and how results can be reproduced.

All papers must be original and not simultaneously submitted to another journal or conference. We welcome work-in-progress submissions, blue-sky papers, and demonstrations. The review will be single-blind (one-way anonymized). Proceedings will be published through arXiv by each individual author, and links to the papers will be hosted on the workshop website. Submitted papers should conform to the latest ACM LaTeX or Word publication format. Click here for LaTeX templates and examples (download the zip package entitled Primary Article Template - LaTeX).

The following paper categories are welcome:

The Call for Papers, including instructions for the submissions, can also be found on the EasyChair page.

Important dates

List of Topics

Program Committee

TBD

Organizing committee

Contact

All questions about submissions should be emailed to piek.vossen@vu.nl