This repository provides supplementary material for the paper "Aggregating Labels from Humans and AIs with Asymmetric Performance," currently under peer review.
We provide a Docker container for easy reproduction.
Note: JupyterLab is available at http://localhost:8008/ while this container is running. To find the access token, list the running Jupyter servers:
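```
$ docker exec -it collapse jupyter server list
```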
To run the main experiments:

```
$ docker compose up -d
$ docker exec -it collapse bash
$ cd main_experiment
$ rm -r results
$ rm -r results_cbcc
$ rm -r results_human
$ rm -r results_human_cbcc
$ mkdir results results_cbcc results_human results_human_cbcc
$ python exp.py
```

Note: It will take 2-3 weeks for all the experiments to complete, as they require over 30,000 runs.
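Since the full run takes weeks, you may want `exp.py` to keep running after your `docker exec` session ends. One option (a convenience suggestion, not a required step) is to launch it with `nohup` inside the container:

```
$ nohup python exp.py > exp.log 2>&1 &   # detach the long run; progress goes to exp.log
```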
CBCC cannot be run in a non-Windows environment, so please run `exp_cbcc.py` on a Windows PC. We used Python 3.11.3 with the libraries listed in `requirements_python3_win.txt`.
However, data containing only human worker results (with `num_ai=0`) cannot be generated this way. Please run `notebooks/human_only_results.ipynb` (in the container) and `notebooks/human_only_results_cbcc.ipynb` (in the Windows venv).
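For the Windows side (running `exp_cbcc.py` and the CBCC notebook), a minimal setup might look like the sketch below. It assumes a standard `venv` workflow; the environment name `cbcc-env` is only an illustration.

```
> rem sketch: create and activate a venv, install the pinned libraries, run the CBCC experiments
> python -m venv cbcc-env
> cbcc-env\Scripts\activate
> pip install -r requirements_python3_win.txt
> python exp_cbcc.py
```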
For reproducibility, we provide the code used to process and generate the human and AI responses in the `preprocessing` folder.
If you want to reproduce the experiments starting from the data generation process, you can regenerate the data with the following commands.
```
$ docker compose up -d
$ docker exec -it collapse bash
$ cd main_experiment/preprocessing
$ python generate_human_responses.py
$ python generate_ai_responses.py
```

The `preprocessing/raw_datasets` directory contains the raw datasets before redundancy adjustment.
These datasets were copied from the following publicly available sources (excluding Tiny).

- Dog: https://github.com/zhydhkcws/crowd_truth_infer/tree/master/datasets/s4_Dog%20data
- Face: https://github.com/zhydhkcws/crowd_truth_infer/tree/master/datasets/s4_Face%20Sentiment%20Identification
- Tiny: Published online for the first time by us, in this repository.
- Adult: It was available at https://toloka.ai/datasets/ as "Toloka Aggregation Features" but is no longer distributed; you can still retrieve it via the Wayback Machine.
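If you need the Adult data, the Wayback Machine's public availability API can point you to the closest archived snapshot of the Toloka datasets page (you will still have to check which capture contains the download):

```
$ curl "http://archive.org/wayback/available?url=toloka.ai/datasets/"
```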
We obtained a total of 38,125 lines of experimental results and provide a visualization tool to analyze them.
```
$ docker compose up -d
$ docker exec -it collapse bash
$ cd main_experiment/streamlit
$ streamlit run app.py --server.port 9999
```

Please visit http://localhost:9009/ on the host to use this app.
You can regenerate some of the subfigures of Figures 5, 8, and 9 with this app.
We provide notebooks that allow you to re-run the case studies and analyses performed in our paper.

- Confusion matrices (Figure 6): `notebooks\cm_analysis.ipynb`
- Analysis of convergence (Section 5.1.3): `notebooks\analysis_convergence.ipynb`
- Communities of CBCC (Figure 7): `notebooks\CBCC_analysis.ipynb`
Our experimental results can be found in `results`.
The additional_methods folder contains implementations of various aggregation methods, copied from the following repositories with minimal modifications.
| Method | Link |
|---|---|
| CATD, LFC, Minmax, PM-CRH, ZC | https://github.com/zhydhkcws/crowd_truth_infer |
| LA | https://github.com/yyang318/LA_onepass |
The human response data is `human_responses_with_gt.csv`.
The AI response data is stored in `ai_responses`.
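To check the data layout before re-running anything, you can peek at the first rows of the human CSV (run this from the directory that contains it; the column set is whatever the file defines):

```
$ head -n 5 human_responses_with_gt.csv
```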
If you need to regenerate the AI response data, please run the following commands.

```
$ docker compose up -d
$ docker exec -it collapse bash
$ cd additinal_experiment
$ python generate_ai_responses.py
```

Each method uses a different environment, notebook, and script. You will need to configure the file paths to match your execution environment.
Run `notebooks\evaluate_crowdkit.ipynb` in the container.
Run `notebooks\evaluate_CBCC.ipynb` on a Windows computer.
We used Python 3.11.3 with the libraries listed in `requirements_python3_win.txt`.
- Run `notebooks\transform_to_truth_infer_format.ipynb` in the container.
- Set up a Python 2.7.13 execution environment on a Windows PC and activate the venv.
- Install the libraries listed in `requirements_python27_win.txt` (in the project root).
- Run `scripts/py27_win.bat` in the `scripts` folder.
- Deactivate the Python 2 venv.
- Run `scripts/py3_win.bat` in the `scripts` folder, using the Python 3 Windows venv for CBCC.
- To run Minmax, you have to use MATLAB (paid) or MATLAB Online (free).
- Run `additinal_methods/l_minimax-s/prepare.m` in MATLAB with `truth_infer_0.csv`, `truth_infer_5.csv`, and `truth_infer_10.csv`.
- Using `notebooks/evaluate_truth_infer.ipynb`, calculate the scores of each run in the container.
Run `notebooks\evaluate_bds_hsds.ipynb` in the container.
Run `notebooks\summarize_results.ipynb` in the container.
The `methods` folder contains code for BDS, HS-DS, and CBCC in Crowd-Kit format.
Please read `prior_distributions.md` in the project root directory for information on prior distributions.
Some of our code uses Crowd-Kit code under its license; we would like to express our gratitude to the Crowd-Kit team. We have also included the original CBCC code by its authors, with minimal modifications, and are likewise grateful to them.