From now on we will focus on the problem of assigning papers to
reviewers.
We assume that each reviewer is given access to the
score indicating his/her level of interest in reviewing the paper and an
``expertise'' score indicating how qualified he/she is to evaluate the paper.
(Some organizations may use a single preference score and assume that it
also indicates expertise. We believe that making the distinction may better
model the real-world objective.)
A reviewer may also declare a conflict of interest with a particular paper,
meaning that he/she is forbidden to review the paper.
For each reviewer $i$ and paper $j$, there is a unit-capacity edge from $i$
to $j$ allowing that pair to be assigned, unless the reviewer declared a
conflict of interest, in which case the edge is not present. The edge cost is
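This bipartite construction can be sketched as a small min-cost-flow instance. The reviewer and paper names, the preference scores, and the particular cost function below are all illustrative assumptions, not the tool's actual data or formula; the point is only the wiring: one unit-capacity edge per non-conflicted reviewer-paper pair, and no edge at all for a declared conflict.

```python
import networkx as nx

# Hypothetical data: preference scores (higher = more interested)
# and declared conflicts of interest.
reviewers = ["r1", "r2", "r3"]
papers = ["p1", "p2"]
preference = {("r1", "p1"): 3, ("r1", "p2"): 1,
              ("r2", "p1"): 2, ("r2", "p2"): 3,
              ("r3", "p1"): 1, ("r3", "p2"): 2}
conflicts = {("r3", "p2")}  # forbidden pair: edge simply not present

G = nx.DiGraph()
for i in reviewers:
    G.add_edge("source", i, capacity=1, weight=0)  # load limit 1 per reviewer here
for j in papers:
    G.add_edge(j, "sink", capacity=1, weight=0)    # each paper needs 1 review here
for (i, j), score in preference.items():
    if (i, j) in conflicts:
        continue
    # One plausible cost function: penalize low scores sharply without
    # flattening the difference between good and excellent matches.
    G.add_edge(i, j, capacity=1, weight=(4 - score) ** 2)

G.nodes["source"]["demand"] = -len(papers)
G.nodes["sink"]["demand"] = len(papers)
flow = nx.min_cost_flow(G)
assignment = [(i, j) for i in reviewers for j in papers
              if flow.get(i, {}).get(j, 0) == 1]
print(assignment)  # the two cheapest non-conflicted pairs
```

With these made-up scores the optimizer assigns paper 1 to reviewer 1 and paper 2 to reviewer 2, the unique cheapest feasible matching.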
really bad matched pairs without completely masking the difference between a
good matched pair and an excellent one. This choice seeks only to achieve a
natural relationship between a linear preference scale as normally interpreted
and the costs to be used in the optimization. We realize that strategic
values to submit, in which case its form matters little.
Alongside these purely additive per-review costs,
we want to avoid an individual reviewer
getting too many papers he/she does not like.
With respect to a reviewer $i$, we classify papers as ``interesting'',
the thresholds for these classes are currently
the same for all reviewers. The edge for reviewer $i$ and paper
$j$ leaves from $r^1_i$ if $j$ is interesting, $r^2_i$ if $j$ is boring, or
the thresholds for these classes are currently
the same for all reviewers. The edge for reviewer $i$ and paper
$j$ leaves from $r^1_i$ if $j$ is interesting, $r^2_i$ if $j$ is boring, or
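One way to realize this tier construction for a single reviewer is sketched below. The wiring, capacities, and overload weights are assumptions for illustration, not necessarily the tool's exact gadget: paper edges leave from the tier node matching the paper's class, and flow for boring papers must first cross chain edges with increasing weights, so each additional boring paper a reviewer receives costs more than the last.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge("source", "r1_tier1", capacity=3, weight=0)  # reviewer load limit 3
# Parallel unit edges (via dummy nodes) from tier 1 to tier 2 give a
# convex overload cost: the first boring paper costs 1, the second 4.
for k, w in enumerate([1, 4]):
    dummy = f"r1_boring_{k}"
    G.add_edge("r1_tier1", dummy, capacity=1, weight=w)
    G.add_edge(dummy, "r1_tier2", capacity=1, weight=0)

# Paper edges leave from the tier matching the paper's class for r1.
G.add_edge("r1_tier1", "pA", capacity=1, weight=0)  # pA interesting to r1
G.add_edge("r1_tier2", "pB", capacity=1, weight=0)  # pB boring to r1
G.add_edge("r1_tier2", "pC", capacity=1, weight=0)  # pC boring to r1
for p in ("pA", "pB", "pC"):
    G.add_edge(p, "sink", capacity=1, weight=0)

G.nodes["source"]["demand"] = -3
G.nodes["sink"]["demand"] = 3
cost = nx.cost_of_flow(G, nx.min_cost_flow(G))
print(cost)  # 0 for pA, plus 1 + 4 for the two boring papers
```

Because pA is reachable only from tier 1 while pB and pC sit behind the chain edges, the overload penalty of 1 + 4 = 5 is incurred exactly when this reviewer takes both boring papers.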
Reviewer 2 is an expert on paper 1, while reviewers 1 and 3 are merely knowledgeable.
(Reviewer edges for paper 2 are not shown.)
This illustrates how, in principle,
Each is taken into account at a different stage of the construction.
The cost of a flow (assignment) is the sum of its reviewer overload costs,
\section{Getting the Tool}
A distribution containing the source code for the matching tool as well as this
\[\hbox{\url{https://mattmccutchen.net/match/}}\]
There are currently two branches:
\begin{itemize}
\item \code{master} has the tool as originally designed for NSF, with no
``fixing'' previously chosen reviewer-paper pairs (buggy, however),
and the special ERC gadget.
\end{itemize}