From now on we will focus on the problem of assigning papers to
reviewers.
We assume that each reviewer is given access to the
list of papers to be reviewed, and gives each paper both a ``preference''
score indicating his/her level of interest in reviewing the paper and an
``expertise'' score indicating how qualified he/she is to evaluate the paper.
(Some organizations may use a single preference score and assume that it
also indicates expertise. We believe that making the distinction may better
model the real-world objective.)
A reviewer may also declare a conflict of interest with a particular paper,
meaning that he/she is forbidden to review the paper.
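For concreteness, the per-pair input can be pictured as follows (a Python
sketch with illustrative names; the tool's actual input format may differ):
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class ReviewerPaperInput:
    """One reviewer's stated relationship to one paper (illustrative)."""
    preference: int  # 1 (most interested) ... 40 (least), NSF scale
    expertise: int   # how qualified the reviewer is to evaluate it
    conflict: bool = False  # True forbids this assignment entirely
\end{verbatim}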
For each reviewer $i$ and paper $j$, there is a unit-capacity edge from $i$
to $j$ allowing that pair to be assigned, unless the reviewer declared a
conflict of interest, in which case the edge is not present. The edge cost is
based on the preference value $a_{ij}$ stated by reviewer $i$ for paper
$j$. For values on the NSF scale of 1 (best) to 40 (worst), we chose the cost
function $(10 + a_{ij})^2$, in an attempt to provide an incentive to avoid
really bad matched pairs without completely masking the difference between a
good matched pair and an excellent one. This choice seeks only to achieve a
natural relationship between a linear preference scale as normally interpreted
and the costs to be used in the optimization. We realize that strategic
reviewers will take the cost function into account in choosing what preference
values to submit, in which case its form matters little.
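As a small illustration of this choice (a sketch in Python; the function name
\code{edge\_cost} is ours, not part of the tool):
\begin{verbatim}
def edge_cost(preference):
    """Edge cost for a reviewer-paper pair whose stated preference
    a_ij lies on the NSF scale of 1 (best) to 40 (worst)."""
    return (10 + preference) ** 2

# (10 + 1)^2 = 121 for an excellent pair and (10 + 5)^2 = 225 for a
# good one, versus (10 + 40)^2 = 2500 for the worst, so really bad
# pairs are penalized heavily without masking the gap between good
# and excellent pairs.
\end{verbatim}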
Alongside these purely additive per-review costs,
we want to avoid an individual reviewer
getting too many papers he/she does not like.
With respect to a reviewer $i$, we classify papers as ``interesting'',
``boring'', or ``very boring'' based on their preference values;
the thresholds for these classes are currently
the same for all reviewers. The edge for reviewer $i$ and paper
$j$ leaves from $r^1_i$ if $j$ is interesting, $r^2_i$ if $j$ is boring, or
$r^3_i$ if $j$ is very boring.
In the example of the figure, reviewer 2 is expert on paper 1, with
reviewers 1 and 3 merely knowledgeable.
(Reviewer edges for paper 2 are not shown.)
This illustrates how, in principle,
the preference and expertise relations might differ.
Each is taken into account at a different stage of the construction.
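A minimal sketch of the three-way classification, in Python; the threshold
constants below are hypothetical, since only their existence (and that they
are currently shared by all reviewers) is stated above:
\begin{verbatim}
# Hypothetical thresholds on the 1-40 preference scale; the values
# actually used by the tool are not reproduced here.
BORING = 15
VERY_BORING = 30

def source_node(preference):
    """Index k of the node r^k_i from which the pair's edge leaves."""
    if preference < BORING:
        return 1  # "interesting"
    elif preference < VERY_BORING:
        return 2  # "boring"
    else:
        return 3  # "very boring"
\end{verbatim}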
The cost of a flow (assignment) is the sum of its reviewer overload costs
and its per-review edge costs.
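Concretely, writing $M$ for the set of assigned reviewer-paper pairs and
$\mathrm{overload}_i(M)$ for the overload cost charged to reviewer $i$ (a
sketch; the exact form of the overload term is fixed by the construction and
not reproduced here), the objective is
\[ \mathrm{cost}(M) = \sum_{(i,j) \in M} (10 + a_{ij})^2
   + \sum_i \mathrm{overload}_i(M). \]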
\section{Getting the Tool}
A distribution containing the source code for the matching tool as well as this
document may be browsed or downloaded at:
\[\hbox{\url{https://mattmccutchen.net/match/}}\]
There are currently two branches:
\begin{itemize}
\item \code{master} has the tool as originally designed for NSF, with no
distinction between preference and expertise.
\item \code{popl2012} is the basis of the version used for POPL 2012. The main
differences are that it has separate preference and expertise, support for
``fixing'' previously chosen reviewer-paper pairs (currently buggy),
and the special ERC gadget.
\end{itemize}