As intelligent systems increasingly blend into our everyday lives, artificial social intelligence is becoming a prominent area of research. Human language offers a uniquely unconstrained way to probe social situations: asking questions and reasoning through answers. This unconstrained approach extends previous attempts to model social intelligence through numeric supervision (e.g. sentiment and emotion labels), and it is the cornerstone of the Social-IQ dataset. The dataset contains rigorously annotated and validated videos, questions, and answers, along with annotations for the complexity level of each question and answer. Social-IQ poses novel challenges to artificial intelligence, sparking future research in social intelligence modeling, visual reasoning, and multimodal question answering.
You can download Social-IQ from our GitHub page. We will periodically update the data and add new descriptors in the CMU-Multimodal SDK format. The evaluation scripts are available for download as well.
You can participate in the Social-IQ challenge. Results are evaluated on the private test set.
The leaderboard keeps track of the top-performing models.
Join Our Mailing List
Submit to Leaderboard
Submit your results to the Social-IQ leaderboard. The submission process is simple and currently open, and accepted results will be added to the leaderboard. To obtain the private test data, please contact Amir Zadeh (email@example.com).
Watch the CVPR Talk
Watch the Social-IQ presentation at CVPR 2019, where we discuss the overall statistics of the dataset as well as baseline performances. You can also have a look at our poster.
[06/15/2019] We will release a public test set for Social-IQ 1.1 on July 15th, containing 100 publicly available test samples.
[06/15/2019] Watch the CVPR 2019 talk.
[06/15/2019] The GitHub repository is now live!
[06/15/2019] Computational sequences are available for download through the CMU Multimodal SDK.
[06/15/2019] Social-IQ Version 1.0 released.