OS Survey F98


Submission Criteria

For our OSSurveyF98 conference, you, as researchers, must submit your paper electronically to the program chair (me). Use MIME to send me a PDF or PostScript version of your file. If you use PostScript, it must be portable PostScript -- Unix programs that generate PostScript (e.g., dvips with LaTeX, or FrameMaker's output) tend to be okay; under Windows, verify that your printer driver properties have the "maximize portability" option selected, so the output will print on any printer. Typically, if I'm forced to use Windows, I install the Apple LaserWriter driver and select "maximize portability", and that seems to work okay.
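
As a concrete illustration, here is a minimal sketch in Python of sending a PDF as a MIME attachment. The addresses, SMTP relay, and filename below are placeholders, not the real course values -- any mailer that attaches the file with MIME type application/pdf works just as well.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "OSSurveyF98 submission"
msg["From"] = "author@example.edu"       # placeholder: your address
msg["To"] = "chair@example.edu"          # placeholder: the program chair
msg.set_content("OSSurveyF98 paper submission attached.")

# Attach the paper as application/pdf so a MIME-aware mailer decodes it intact.
with open("paper.pdf", "rb") as f:
    msg.add_attachment(f.read(), maintype="application",
                       subtype="pdf", filename="paper.pdf")

with smtplib.SMTP("smtp.example.edu") as s:  # placeholder SMTP relay
    s.send_message(msg)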

Program Committee

After you've submitted your papers, you will play the part of program committee members. This means reviewing the papers for the correctness / soundness of the claims or observations, possibly tracking down their references to verify statements, etc.

Review Format

For our OSSurveyF98 conference, you, as members of the program committee, should get the on-line submissions from this web page. The papers should be evaluated using the following method (this is taken from a real conference's program committee review instructions and revised slightly, since survey papers will not necessarily contain new research -- though if there are new research ideas, that's even better):
Papers will be evaluated in two parts: a set of numeric scores and a written commentary. Scores will be on a 1 to 7 scale (with 4 as "average"; in all cases, 1 is worst and 7 is best). Do not assign a zero score, and please use integer values.

Scoring will be in four categories:

Import -- is the work (both its area and results) important to the OS community? Low scores might be given for evaluations of marketing literature and papers on inappropriate or dead topics. High scores are for papers that nearly all attendees will want to read and understand carefully.

Novelty -- are the observations novel? Low scores should be given for papers that re-hash obvious results or known observations about works in the topic area. High scores are for papers that point out new research areas (portions of design space not explored that ought to be), new fields, or demonstrate new ways to attack a problem.

Quality -- are the observations / criticisms sound? A low score might go to a paper whose observations are incorrect or whose critiques are biased or not well supported in your opinion. High scores are for papers with enough justification to convince you that the opinions are correct and viable.

Overall -- should we accept this paper or not? This is by far the most important number. It need not be an average of the other numbers, but it should reflect them. This number can also reflect issues in addition to those described above (e.g., poor presentation or lack of knowledge of related work).

Note that the conference evaluation contains criteria for novelty and importance to the OS community; when I grade your papers, those criteria will matter less -- I will pay more attention to the quality of the reasoning and the soundness of the observations / criticisms, so having picked a topic that's currently "hot" (or not) won't count for much.

The review should be in the following format (best if you cut and paste, or use the review template). The reviews will be machine parsed to generate statistics; a sketch of such a parser follows the template below.

Paper 99
-------- 8< --------  scores  -------- 8< --------  scores  --------
Import		Novelty		Quality		Overall
7		1		5		4
-------- 8< -------- comments -------- 8< -------- comments --------
Your comments on the paper.  These are public comments that the paper's
authors will see.  Provide feedback to improve their paper, etc.
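
Since the reviews will be machine parsed, here is a rough Python sketch of how such a parser might collect and average the scores. This is illustrative only, not the actual course script; it assumes one review per input file, with filenames given on the command line.

import re
import sys
from collections import defaultdict

CATEGORIES = ("Import", "Novelty", "Quality", "Overall")
totals = defaultdict(lambda: [0, 0, 0, 0])   # paper number -> summed scores
counts = defaultdict(int)                    # paper number -> number of reviews

for path in sys.argv[1:]:
    text = open(path).read()
    paper = int(re.search(r"Paper\s+(\d+)", text).group(1))
    # The scores are the four integers on the line below the category header.
    m = re.search(r"Import\s+Novelty\s+Quality\s+Overall\s+"
                  r"(\d+)\s+(\d+)\s+(\d+)\s+(\d+)", text)
    for i, s in enumerate(int(g) for g in m.groups()):
        if not 1 <= s <= 7:                  # scores must be integers in 1..7
            raise ValueError(f"{path}: score {s} out of range")
        totals[paper][i] += s
    counts[paper] += 1

for paper in sorted(totals):
    avgs = (t / counts[paper] for t in totals[paper])
    print(paper, "  ".join(f"{c}={a:.2f}" for c, a in zip(CATEGORIES, avgs)))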

Submissions

  1. File Availability and Consistency for Mobile Systems by Bent, Elliott, Semanko, and Sherwood. [PDF]
  2. Multiprocessor Scheduling: A Survey by Huffaker, Peisert, Sievert, and Tune. [PDF]
  3. A Guide to Transparency in Five Research Metasystems by Chu, Eng, Munroe, and Smallen. [PDF]
  4. Coscheduling on Cluster Systems by Liu, Al-Shammari, Mohammad, and Le. [PDF]
  5. Issues in Multimedia Task Scheduling by Barroso, Manoli, and Petropoulos. [PDF]
  6. Task Management Issues in Distributed Systems by Anantha, Sugimoto, Suryawan, and Tran. [PDF]
  7. Process Communications in Clusters by Al-Muhammadi, Petrov, Wang, and Warinschi. [PDF]
  8. Scheduling Real-Time Tasks in Distributed Systems by Gantman, Guo, Lewis, and Rashid. [PDF, orig PS]
  9. Practice and Technique in Extensible Operating Systems by Carver, Chen, and Reyes. [PDF]

Review Assignments

Sorted by Paper
Paper   Reviewers   Anonymized comments
1 Chu, Le, Tran, Gantman, Huffaker, Mohammad, Chen, Munroe fb1.txt
2 Bent, Barroso, Al-Muhammadi, Guo, Al-Shammari, Suryawan, Carver fb2.txt
3 Elliott, Manoli, Petrov, Lewis, Peisert, Liu, Sugimoto, Warinschi fb3.txt
4 Semanko, Eng, Wang, Rashid, Sievert, Smallen, Petropoulos fb4.txt
5 Sherwood, Munroe, Warinschi, Carver, Tune, Anantha, Lewis fb5.txt
6 Huffaker, Petropoulos, Chen, Rashid, Sherwood, Eng, Wang, Guo fb6.txt
7 Peisert, Liu, Anantha, Reyes, Semanko, Chu, Manoli, Gantman fb7.txt
8 Sievert, Al-Shammari, Sugimoto, Elliott, Barroso, Petrov, Reyes fb8.txt
9 Tune, Mohammad, Suryawan, Tran, Bent, Le, Al-Muhammadi, Smallen fb9.txt
Sorted by Reviewer
Reviewer        Papers
Al-Muhammadi 2 9
Al-Shammari 2 8
Anantha 5 7
Barroso 2 8
Bent 2 9
Carver 2 5
Chen 1 6
Chu 1 7
Elliott 3 8
Eng 4 6
Gantman 1 7
Guo 2 6
Huffaker 1 6
Le 1 9
Lewis 3 5
Liu 3 7
Manoli 3 7
Mohammad 1 9
Munroe 1 5
Peisert 3 7
Petropoulos 4 6
Petrov 3 8
Rashid 4 6
Reyes 7 8
Semanko 4 7
Sherwood 5 6
Sievert 4 8
Smallen 4 9
Sugimoto 3 8
Suryawan 2 9
Tran 1 9
Tune 5 9
Wang 4 6
Warinschi 3 5

You should write up your reviews by Tuesday (Nov 24) and email me the entire review. The authors will get the comments; I will sum up / average the numeric scores and provide those to the authors as well.

Review Data

The averaged review scores are available in a table that is machine generated from your reviews.

Conference Presentation

All the groups should prepare a presentation. On Dec 1, I'll announce the top two papers; their authors will actually give the presentations. All of the groups (including those that give the actual presentations) should print out their slides (on paper) or generate portable PostScript/PDF and send that to me for grading/review. This should be done by the day of the presentation.

On Thursday (Dec 3), the groups with the top two papers will give an oral presentation in front of the entire class. I'll arrange to have an overhead projector available. Each group will have about 1/2 hour total, so you should plan on 20-25 minutes for the presentation and 5-10 minutes for questions and answers from the audience.

The Conference

For our OSSurveyF98 conference, you, as attendees, will also help evaluate the presentation of the papers. The top two papers will be presented by their authors. An overhead projector will be available for the oral presentations. The presentations should give an overview of the topic and results; their main purpose is to motivate the audience to read the full paper published in the proceedings. As attendees, your main evaluation criterion is this: do the presenters convey the key ideas clearly? Does the presentation make you want to look at the details in the paper? (Making you look because the presentation was confusing doesn't count!)