Center for Human Modeling and Simulation

Document Type

Conference Paper

Date of this Version

2011

Publication Source

Lecture Notes in Computer Science: Motion in Games

Volume

7060

Start Page

266

Last Page

277

DOI

10.1007/978-3-642-25090-3_23

Comments

4th International Conference, MIG 2011, Edinburgh, UK, November 13-15, 2011.

Abstract

The statistical analysis of multi-agent simulations requires a definitive set of benchmarks that represent the wide spectrum of challenging scenarios that agents encounter in dynamic environments, and a scoring method to objectively quantify the performance of a steering algorithm for a particular scenario. In this paper, we first recognize several limitations in prior evaluation methods. Next, we define a measure of normalized effort that penalizes deviation from desired speed, deviation from optimal paths, and collisions in a single metric. Finally, we propose a new set of benchmark categories that capture the different situations that agents encounter in dynamic environments and identify truly challenging scenarios for each category. We use our method to objectively evaluate and compare three state-of-the-art steering approaches and one baseline reactive approach. Our proposed scoring mechanism can be used (a) to evaluate a single algorithm on a single scenario, (b) to compare the performance of an algorithm over different benchmarks, and (c) to compare different steering algorithms.
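
The paper itself defines the exact normalized-effort formulation; the sketch below is only a rough illustration of how a single metric might combine penalties for speed deviation, path deviation, and collisions. The function name, weights, and specific penalty terms are assumptions made for illustration, not the authors' definition.

    def normalized_effort(path_length, optimal_length,
                          avg_speed, desired_speed,
                          num_collisions,
                          w_path=1.0, w_speed=1.0, w_coll=1.0):
        # Hypothetical combination: 0 means the agent followed the optimal
        # path at the desired speed without colliding; larger values mean
        # more wasted effort. The paper's actual weighting and
        # normalization may differ.
        path_penalty = max(path_length / optimal_length - 1.0, 0.0)     # extra distance travelled
        speed_penalty = abs(avg_speed - desired_speed) / desired_speed  # deviation from desired speed
        collision_penalty = float(num_collisions)                       # fixed cost per collision
        return (w_path * path_penalty
                + w_speed * speed_penalty
                + w_coll * collision_penalty)

    # Example: an agent travelled 12 m where 10 m was optimal, averaged
    # 1.2 m/s against a desired 1.4 m/s, and had one collision.
    print(normalized_effort(12.0, 10.0, 1.2, 1.4, 1))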

Copyright/Permission Statement

The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-25090-3_23

Date Posted: 13 January 2016

This document has been peer reviewed.