Evaluation Strategies for HCI Toolkit Research
- Submitting institution
- The University of Lancaster
- Unit of assessment
- 11 - Computer Science and Informatics
- Output identifier
- 220855851
- Type
- E - Conference contribution
- DOI
- 10.1145/3173574.3173610
- Title of conference / published proceedings
- CHI '18 Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
- First page
- 1
- Volume
- -
- Issue
- -
- ISSN
- -
- Open access status
- -
- Month of publication
- April
- Year of publication
- 2018
- URL
- -
- Supplementary information
- -
- Request cross-referral to
- -
- Output has been delayed by COVID-19
- No
- COVID-19 affected output statement
- -
- Forensic science
- No
- Criminology
- No
- Interdisciplinary
- No
- Number of additional authors
- 5
- Research group(s)
- E - Interactive Systems
- Citation count
- 14
- Proposed double-weighted
- No
- Reserve for an output with double weighting
- No
- Additional information
- This paper emerged from a years-long international collaboration in which we established a new evaluation framework for assessing the impact and effectiveness of technical HCI research toolkits and systems. The work is published in the ACM CHI proceedings, the top publication venue for HCI research, and has been presented in public talks to industry (e.g., Microsoft Research and Adobe), in invited talks at academic institutions (e.g., the University of Toronto and Delft University), and in various workshops and seminars (e.g., HCITools). Although only published in mid-2018, the paper has already been included in several teaching curricula and has been cited in HCI books.
- Author contribution statement
- -
- Non-English
- No
- English abstract
- -