The peer_score_rating() function collects score information from the YAML of the rating form in each author's repository. It outputs a new .csv file with one row per question score per student.

peer_score_rating(
  org,
  roster,
  form_rating,
  double_blind = TRUE,
  prefix = "",
  suffix = "",
  write_csv = TRUE
)

Arguments

org

Character. Name of the GitHub organization.

roster

Data frame or character. Data frame or file path of the roster file with author-reviewer assignments. Must contain a column user with the GitHub user names of authors, a column user_random with randomized tokens standing in for those user names, and one or more rev* columns that specify review assignments as values of user_random.
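
For illustration, a roster with the required columns might look like the following (all values are hypothetical):

# Hypothetical roster: each author is reviewed by two peers,
# identified by their user_random tokens
roster <- data.frame(
  user        = c("alice", "bob", "carol"),
  user_random = c("anon-101", "anon-202", "anon-303"),
  rev1        = c("anon-202", "anon-303", "anon-101"),
  rev2        = c("anon-303", "anon-101", "anon-202")
)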

form_rating

Character. File name of the rating feedback form (must be an .Rmd document).

double_blind

Logical. Specifies whether the review is conducted double-blind (i.e. neither reviewers nor authors can identify one another) or single-blind (i.e. authors remain anonymous but reviewer identities are revealed). If double_blind = TRUE, reviewer folders are identified by the anonymized user IDs in the roster's user_random column. If double_blind = FALSE, reviewer folders are identified by the original user names. Defaults to TRUE.
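
As a sketch of this behavior (the folder naming below is an assumption based on the description above, not taken from the package source), the identifier used for a reviewer's folder could be derived as:

# Sketch (assumed naming scheme): identifier labelling a reviewer's folder
folder_id <- function(user, user_random, double_blind = TRUE) {
  if (double_blind) user_random else user
}
folder_id("alice", "anon-101", double_blind = TRUE)   # "anon-101"
folder_id("alice", "anon-101", double_blind = FALSE)  # "alice"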

prefix

Character. Common repository name prefix.

suffix

Character. Common repository name suffix.
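
The prefix and suffix are assumed to be combined with each user name to form the full repository name; a minimal sketch of that naming pattern:

# Sketch (assumed naming): repository names built as prefix + user + suffix
prefix <- "hw2-"
suffix <- ""
paste0(prefix, c("alice", "bob", "carol"), suffix)
#> [1] "hw2-alice" "hw2-bob"   "hw2-carol"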

write_csv

Logical. Whether the resulting score data frame should be saved to a .csv file in the current working directory. Defaults to TRUE.
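
For example, to collect the scores without writing a file (this assumes the function returns the score data frame, which is an assumption made here for illustration):

# Sketch: keep the scores in memory only; assumes a data frame is returned
scores <- peer_score_rating(
  org = "ghclass-test",
  roster = "hw2_roster_seed12345.csv",
  form_rating = "hw2_rating.Rmd",
  write_csv = FALSE
)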

Examples

if (FALSE) {
  prefix <- "hw2-"  # assumed common repository prefix for this example
  peer_score_rating(
    org = "ghclass-test",
    roster = "hw2_roster_seed12345.csv",
    form_rating = "hw2_rating.Rmd",
    double_blind = TRUE,
    prefix = prefix
  )
}