
Architecture & Implementation Book for the Integration of MÜSLI into MaMpf

This book documents the architecture, user-facing features, and implementation plan for the integration of MÜSLI features into MaMpf. It is organized into three main parts:

  1. Core Architecture: Documents the backend services, data models, and workflows. This includes:

    • Registration workflows (campaigns, policies, allocation)
    • Preference-based assignment strategy
    • Post-allocation administration patterns
    • Assessments, grading, and eligibility computation
    • The end-to-end lifecycle from enrollment to grading
  2. User-Facing Applications: Describes the dashboards and interfaces for students and staff that consume the core architecture.

  3. Project Planning: Outlines the high-level implementation plan and migration strategy for integrating these new features into the existing platform.

Use the chapters in the sidebar for deep dives into each topic. This document serves as a quick orientation.

Overview

A high‑level map of the architecture proposed for the integration of MÜSLI into MaMpf. Each later layer depends only on stable persisted results from earlier phases (no hidden cross‑coupling).

flowchart LR
    A[Registration] --> B[Allocation];
    B --> C[Rosters];
    C --> D[Assessments & Grading];
    D --> E[Eligibility];
    E --> F[Exam Registration];
    F --> H[Exam Grading];
    H --> G[Reporting];

Core flow (see End-to-End Workflow):

  1. Campaign setup & user registrations (Registration System)
  2. Preference assignment (if needed) (Algorithm Details)
  3. Allocation materialization to domain rosters (Rosters)
  4. Ongoing roster administration (moves, late adds)
  5. Coursework assessments, submissions, points & grades (Assessments & Grading)
  6. Achievements & eligibility computation (exam gating) (Student Performance)
  7. Exam registration (policy gated)
  8. Exam assessment creation & grading (Grading Schemes)
  9. Dashboards for students & staff (Student Dashboard, Teacher & Editor Dashboard)
  10. Reporting, integrity checks (Integrity & Invariants)
  11. Roadmap & extensibility (Future Extensions)

Book Structure

Core Architecture

User-Facing Applications

Project Planning

Design Tenets

  • Single source of truth per concern (e.g., confirmed assignments live in UserRegistrations + domain rosters after materialization).
  • Idempotent transitions (finalize!, materialize_allocation!).
  • Append/extend rather than mutate history (overrides, policy traces).
  • Pluggable strategies & policies (solvers, eligibility rules).

Domain Model

This chapter summarizes principal entities; authoritative behavioral details live in the referenced chapters.

Registration

| Component | Type | Description |
|---|---|---|
| Registration::Campaign | ActiveRecord | Time‑bounded process (modes: FCFS, preference_based) |
| Registration::Item | ActiveRecord | Wrapper exposing a registerable option under a campaign |
| Registration::UserRegistration | ActiveRecord | (user, item) intent + status (pending/confirmed/rejected) + optional preference_rank |
| Registration::Policy | ActiveRecord | Ordered eligibility rule (student_performance, institutional_email, prerequisite_campaign, custom_script) |
| Registration::Campaignable | Concern | Enables a model to host registration campaigns |
| Registration::Registerable | Concern | Enables a model to be an option within a campaign |
| Registration::PolicyEngine | Service | Executes ordered active policies; short‑circuits on first failure |
| Registration::AllocationMaterializer | Service | Applies confirmed allocations → registerable.materialize_allocation! |

Rosters

| Component | Type | Description |
|---|---|---|
| Roster::Rosterable | Concern | Unified roster API (roster_user_ids, replace_roster!, etc.) |
| Roster::MaintenanceService | Service | Post-allocation admin (move/add/remove) with capacity enforcement |

Assessments & Grading

| Component | Type | Description |
|---|---|---|
| Assessment::Assessment | ActiveRecord | Gradebook container for graded work (assignment, exam, talk, achievement) |
| Assessment::Participation | ActiveRecord | Per-user totals, grade, status, submission timestamps |
| Assessment::Task | ActiveRecord | Per-assessment graded component (only if requires_points) |
| Assessment::TaskPoint | ActiveRecord | Per (participation, task) points + grader + submission link |
| Assessment::Assessable | Concern | Enables a model to be linked to an Assessment::Assessment |
| Assessment::Pointable | Concern | Extends Assessable to enable per-task point tracking |
| Assessment::Gradable | Concern | Extends Assessable to enable final grade recording |
| Assessment::SubmissionGrader | Service | Submission-centered fan-out to TaskPoints for team grading |
| Submission | ActiveRecord | Team-capable artifact optionally linked to a task |
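
The fan-out behavior of the submission grader can be sketched without the surrounding ActiveRecord models. The following is an illustrative, framework-free sketch (hashes stand in for Submission and TaskPoint rows; field names mirror the table above but the exact API is an assumption):

```ruby
# Illustrative sketch of submission-centered team grading: scoring one team
# submission fans the same points out to one per-(user, task) point record
# for every team member. Plain hashes stand in for TaskPoint rows.
class SubmissionGrader
  # submission: { task_id:, member_ids: [...] } (illustrative shape)
  def initialize(submission)
    @submission = submission
  end

  # Returns one task-point record per team member, all sharing grader and score.
  def grade!(points:, grader_id:)
    @submission[:member_ids].map do |user_id|
      { user_id: user_id, task_id: @submission[:task_id],
        points: points, grader_id: grader_id }
    end
  end
end
```

Grading a three-person submission once therefore yields three identical TaskPoint-like records, one per member.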

Student Performance & Certification

| Component | Type | Description |
|---|---|---|
| StudentPerformance::Record | ActiveRecord | Materialized factual performance data per (lecture, user): points_total, achievements_met |
| StudentPerformance::Rule | ActiveRecord | Configuration for eligibility criteria (min_points, required_achievements, assessment_types) |
| StudentPerformance::Certification | ActiveRecord | Teacher's eligibility decision per (lecture, user) with status (passed/failed/pending) |
| Achievement | ActiveRecord | Assessable type for qualitative accomplishments (e.g., blackboard presentations) with Assessment infrastructure |
| StudentPerformance::Service | Service | Computes and upserts Records from coursework points and achievements |
| StudentPerformance::Evaluator | Service | Generates eligibility proposals by evaluating Records against Rules |
| Registration::Policy (kind: student_performance) | Integration | Checks Certification.status during exam registration (no runtime recomputation) |
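
The Record-against-Rule evaluation can be sketched in a few lines. This is a simplified stand-in (plain structs instead of the ActiveRecord models; the `proposal_for` name is illustrative), showing how a proposal is derived from points and achievements before any teacher decision:

```ruby
# Sketch of evaluating a Record against a Rule to produce an eligibility
# proposal (not yet a teacher decision). Field names mirror the table above;
# the structs are illustrative stand-ins for the ActiveRecord models.
Record = Struct.new(:points_total, :achievements_met, keyword_init: true)
Rule   = Struct.new(:min_points, :required_achievements, keyword_init: true)

class Evaluator
  def self.proposal_for(record, rule)
    points_ok = record.points_total >= rule.min_points
    achievements_ok = (rule.required_achievements - record.achievements_met).empty?
    points_ok && achievements_ok ? :passed : :failed
  end
end
```

The teacher then reviews these proposals and records the binding Certification.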

Grading Schemes

| Component | Type | Description |
|---|---|---|
| GradeScheme::Scheme | ActiveRecord | Versioned configuration for converting assessment points to final grades |
| GradeScheme::Applier | Service | Applies scheme to compute and persist final grades for all participations |
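
A points-to-grade conversion of this kind is typically a threshold lookup. The sketch below is illustrative only (the cutoff values and class shape are invented, not the real versioned GradeScheme::Scheme):

```ruby
# Sketch of a threshold-based scheme converting assessment points into final
# grades. The cutoff table is invented for illustration; real schemes are
# versioned GradeScheme::Scheme records applied to all participations.
class GradeScheme
  # thresholds: ordered [min_points, grade] pairs, best grade first
  def initialize(thresholds)
    @thresholds = thresholds
  end

  def grade_for(points)
    pair = @thresholds.find { |min, _grade| points >= min }
    pair ? pair.last : "5.0" # below every cutoff: failing grade
  end
end
```

The applier service would run `grade_for` over each participation's total and persist the result.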

Allocation Algorithm

| Component | Type | Description |
|---|---|---|
| Registration::AllocationService | Service | Strategy dispatcher using pluggable solvers (Min-Cost Flow, future CP-SAT) |
| Registration::Solvers::MinCostFlow | Service | OR-Tools SimpleMinCostFlow implementation for bipartite preference allocation |
| Registration::Solvers::CpSat | Service | Future CP-SAT solver for advanced constraints (fairness, mutual exclusion, quotas) |
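
The strategy-dispatch pattern itself is independent of any solver. Below is a hedged sketch: the registry and names are illustrative, and the registered lambda is a trivial greedy placeholder standing in for the real Min-Cost Flow solver (it serves each user's highest-ranked item with remaining capacity, which does not guarantee the optimal allocation that min-cost flow would find):

```ruby
# Sketch of the strategy-dispatch pattern: the allocation service looks up a
# solver by symbol, so new strategies (e.g. a future CP-SAT solver) plug in
# without touching callers. Registry and solver internals are illustrative.
class AllocationService
  SOLVERS = {}

  def self.register(name, solver)
    SOLVERS[name] = solver
  end

  def initialize(strategy:)
    @solver = SOLVERS.fetch(strategy) { raise ArgumentError, "unknown strategy #{strategy}" }
  end

  # Delegates the actual assignment work to the chosen solver.
  def allocate!(preferences, capacities)
    @solver.call(preferences, capacities)
  end
end

# Greedy placeholder (NOT min-cost flow): serve each user's highest-ranked
# item that still has a free seat.
AllocationService.register(:min_cost_flow, lambda do |preferences, capacities|
  remaining = capacities.dup
  preferences.each_with_object({}) do |(user, ranked_items), assignment|
    item = ranked_items.find { |i| remaining[i].to_i > 0 }
    next unless item
    remaining[item] -= 1
    assignment[user] = item
  end
end)
```

Swapping in the real OR-Tools-backed solver only requires registering it under its own key.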

Linking Concepts

What are linking concepts?

These are the "glue" entities that connect the core domain models (User, Lecture, Tutorial, etc.) to the systems above. They enable domain models to participate in registration, assessment, and eligibility tracking.

Core Domain Models:

  • User - Students, teachers, tutors who participate in the system
  • Lecture - A course offering (e.g., "Linear Algebra WS 2024/25")
  • Tutorial - A tutorial group within a lecture
  • Talk - A student presentation or seminar talk
  • Assignment - A homework assignment
  • Exam - An exam assessment

How they link to the systems:

| Domain Model | Links To | Via | Purpose |
|---|---|---|---|
| Lecture | Registration | Registration::Campaignable concern | Host exam registration campaigns |
| Tutorial | Registration | Registration::Registerable concern | Become a registerable option in tutorial allocation |
| Exam | Registration | Registration::Campaignable concern | Host exam registration campaigns |
| Assignment | Assessment | Assessment::Pointable concern | Track per-task points for homework |
| Exam | Assessment | Assessment::Gradable concern | Record final exam grades |
| Talk | Assessment | Assessment::Gradable concern | Grade student presentations |
| User | All Systems | Direct associations | Student participates in registrations, assessments, eligibility |
| Lecture | Exam Eligibility | Direct association | Scope eligibility records to specific lecture |

Example Flows:

  1. Tutorial Registration: Lecture (campaignable) → creates Registration::Campaign → contains Registration::Item wrapping Tutorial (registerable) → students submit Registration::UserRegistration

  2. Exam Registration: Lecture (campaignable) → creates Registration::Campaign → contains Registration::Item wrapping Exam (registerable) → students submit Registration::UserRegistration → policies check eligibility

  3. Homework Grading: Assignment (pointable) → linked to Assessment::Assessment → contains Assessment::Task → tutors record Assessment::TaskPoint → aggregated into Assessment::Participation

  4. Exam Eligibility: Lecture → students complete Assignment assessments → StudentPerformance::Service aggregates points into StudentPerformance::Record → StudentPerformance::Evaluator generates proposals → teacher creates StudentPerformance::Certification → Registration::Policy (kind: student_performance) checks Certification.status when student attempts Exam registration via the lecture's exam campaign

High-Level ERD (Simplified)

erDiagram
    USER ||--o{ REGISTRATION_USER_REGISTRATION : submits
    REGISTRATION_CAMPAIGN ||--o{ REGISTRATION_ITEM : has
    REGISTRATION_ITEM ||--o{ REGISTRATION_USER_REGISTRATION : options
    REGISTRATION_CAMPAIGN ||--o{ REGISTRATION_POLICY : guards
    REGISTRATION_ITEM }o--|| REGISTERABLE : polymorphic
    ASSESSMENT ||--o{ ASSESSMENT_PARTICIPATION : has
    ASSESSMENT ||--o{ TASK : has
    TASK ||--o{ TASK_POINT : points
    ASSESSMENT_PARTICIPATION ||--o{ TASK_POINT : aggregates
    SUBMISSION ||--o{ TASK_POINT : optional
    USER ||--o{ ASSESSMENT_PARTICIPATION : participates
    LECTURE_PERFORMANCE_RECORD }o--|| USER : "factual data"
    LECTURE_PERFORMANCE_RECORD }o--|| LECTURE : "scoped to"
    LECTURE_PERFORMANCE_CERTIFICATION }o--|| USER : "decision for"
    LECTURE_PERFORMANCE_CERTIFICATION }o--|| LECTURE : "scoped to"
    LECTURE_PERFORMANCE_CERTIFICATION }o--|| LECTURE_PERFORMANCE_RECORD : "based on (optional)"
    ACHIEVEMENT }o--|| USER : "accomplished by"
    ACHIEVEMENT }o--|| LECTURE : "belongs to"
    GRADE_SCHEME_SCHEME ||--|| ASSESSMENT : "applies to"

See details:

Registration System

What is a 'Registration System'?

A registration system manages time-bounded processes where users sign up for course-related activities with constraints and preferences.

  • Common Examples: "Tutorial signup for Linear Algebra", "Seminar talk selection", "Exam registration with eligibility checks"
  • In this context: A flexible campaign-based system supporting direct assignment, preference-based allocation, and composable eligibility policies with automated domain materialization.

Problem Overview

MaMpf needs a flexible registration system to handle:

  • Regular courses: Students register for tutorials within a lecture
  • Seminars: Students register for talks within a seminar (special type of lecture)
  • Mixed scenarios: Combining lecture enrollment with tutorial/talk assignment via a chained process

Solution Architecture

We use a unified system with:

  • Registration Campaigns: Time-bounded processes for registration
  • Polymorphic Design: Any model can become registerable or campaignable (host campaigns)
  • Two-step Chaining: Optional prerequisite campaigns (e.g., must register for seminar before selecting talks) implemented via a prerequisite_campaign policy
  • Allocation Persistence: Store the final allocation (confirmed vs rejected) and optional per-item counters
  • Strategy Layer: Pluggable solver for preference-based allocation (Min-Cost Flow now; CP-SAT later)
  • Domain Materialization (mandatory): After allocation, propagate confirmed assignments back into domain models (e.g., populate talk speakers, tutorial rosters)
  • Registration Policies: Composable eligibility rules (student performance, institutional email, prerequisite, etc.)
  • Policy Phases: Policies declare a phase: registration, finalization, or both. Only policies applicable to the current phase are evaluated/enforced. See Student Performance → Certification (05-student-performance.md) for how finalization uses Certification.
  • Policy Engine: Phase-aware evaluation of ordered active policies; short-circuits on first failure
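
The phase-aware, short-circuiting evaluation described in the last two bullets can be sketched without Rails. This is a minimal illustration under assumed shapes (a simple Result struct and policies responding to `phase`, `name`, and `satisfied_by?`; the real engine and Result carry more detail):

```ruby
# Minimal, framework-free sketch of phase-aware, short-circuiting policy
# evaluation. Policy and Result shapes are illustrative, not the real models.
Result = Struct.new(:pass, :failed_policy, :trace, keyword_init: true)

class PolicyEngine
  # policies: ordered objects responding to #phase, #name, #satisfied_by?(user)
  def initialize(policies)
    @policies = policies
  end

  # Evaluates only policies applicable to the given phase, in order,
  # stopping at the first failure (short-circuit).
  def eligible?(user, phase: :registration)
    trace = []
    @policies.each do |policy|
      next unless [phase, :both].include?(policy.phase)
      ok = policy.satisfied_by?(user)
      trace << [policy.name, ok]
      return Result.new(pass: false, failed_policy: policy.name, trace: trace) unless ok
    end
    Result.new(pass: true, failed_policy: nil, trace: trace)
  end
end
```

A policy scoped to `:finalization` is simply skipped during the `:registration` phase, while `:both` policies run in both places.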

Glossary (Registration)

  • Allocation mode: Enum selecting first_come_first_served or preference_based.
  • AllocationService: Computes allocations (preference-based) via allocate!.
  • AllocationMaterializer: Applies confirmed allocations to domain rosters.
  • Campaign methods: allocate!, finalize!, allocate_and_finalize!.
  • Policy phases: registration gates intake; finalization gates roster materialization; both applies in both places.
  • Assigned users: Users with confirmed status in the registration system (Registration::UserRegistration.confirmed). This is registration-side data.
  • Allocated users: Users materialized into the domain roster after finalization (Tutorial#students, Talk#speakers, etc.). This is domain-side data. After finalization, assigned and allocated should match.

Registration::Campaign (ActiveRecord Model)

The Registration Process Orchestrator

What it represents

A time-bounded administrative process where users can register for specific items under a chosen mode.

Think of it as

“Tutorial Registration Week”, “Seminar Talk Selection Period”, “Exam Signup”

The main fields and methods of Registration::Campaign are:

| Name/Field | Type/Kind | Description |
|---|---|---|
| campaignable_type | DB column | Polymorphic type for the campaign host (e.g., Lecture) |
| campaignable_id | DB column | Polymorphic ID for the campaign host |
| title | DB column | Human-readable campaign title |
| allocation_mode | DB column (Enum) | Registration mode: first_come_first_served or preference_based |
| status | DB column (Enum) | Campaign state: draft, open, closed, processing, completed |
| planning_only | DB column (Bool) | Planning/reporting only; prevents materialization/finalization (default: false) |
| registration_deadline | DB column | Deadline for user registrations (registration requests) |
| registration_items | Association | Items available for registration within this campaign |
| user_registrations | Association | User registrations (registration requests) for this campaign |
| registration_policies | Association | Eligibility and other policies attached to this campaign |
| evaluate_policies_for(user, phase: :registration) | Method | Returns a structured eligibility result for the given phase (delegates to Policy Engine) |
| policies_satisfied?(user, phase: :registration) | Method | Boolean convenience that returns true when all applicable policies pass |
| open_for_registrations? | Method | Returns true if campaign is currently accepting registrations |
| allocate! | Method | Computes allocation (preference-based) without materialization |
| finalize! | Method | Enforces finalization-phase policies, then materializes the latest allocation into domain rosters |
| allocate_and_finalize! | Method | Convenience: computes allocation and then finalizes |

Note

Eligibility is not a single field or method, but is determined dynamically by evaluating all active registration_policies for the campaign using the evaluate_policies_for(user, phase:) method, which delegates to the phase-aware policy engine. Use policies_satisfied?(user, phase:) as a boolean convenience.

API at a glance

  • evaluate_policies_for(user, phase: :registration) → Result (fields: pass, failed_policy, trace, details)
  • policies_satisfied?(user, phase: :registration) → Boolean (true when all applicable policies pass)
  • open_for_registrations? → Boolean (campaign currently accepts registrations)

See also: Controller endpoints in Controller Architecture → Registration Controllers.

Behavior Highlights

  • Guards registration window (open?)
  • Delegates fine-grained eligibility to ordered RegistrationPolicies via Policy Engine
  • Triggers solver (preference-based) after close (often at/after deadline)
  • Finalizes and materializes allocation once only (idempotent)

Assigned vs Unassigned

  • Assigned: the student has exactly one confirmed Registration::UserRegistration in the campaign after allocation/close.
  • Unassigned: the student participated (has registrations) but has zero confirmed entries. On close/finalization, any remaining pending entries are normalized to rejected so the state is explicit.
  • No extra tables are required. Helper methods on Registration::Campaign can expose unassigned_user_ids, unassigned_users, and unassigned_count computed from UserRegistration records.
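
Since these helpers are pure derivations over UserRegistration rows, they can be sketched with plain data. The following is an illustrative sketch (in-memory `(user_id, status)` pairs stand in for the real UserRegistration scopes):

```ruby
# Sketch of the unassigned-user helpers described above, computed purely from
# registration records (no extra tables). Registrations are illustrative
# (user_id, status) pairs; real code would use UserRegistration scopes.
Reg = Struct.new(:user_id, :status)

class CampaignRegistrations
  def initialize(registrations)
    @registrations = registrations
  end

  def assigned_user_ids
    @registrations.select { |r| r.status == :confirmed }.map(&:user_id).uniq
  end

  # Participated (has registrations) but ended with zero confirmed entries.
  def unassigned_user_ids
    @registrations.map(&:user_id).uniq - assigned_user_ids
  end

  def unassigned_count
    unassigned_user_ids.size
  end
end
```

Users with no registrations at all appear in neither list, matching the definitions above.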

Status semantics

Statuses are mode-specific:

  • First-come-first-served (FCFS): registrations are immediately confirmed or rejected.
  • Preference-based: registrations are pending until allocation, then resolved to confirmed or rejected on finalize.

Do not overload pending to represent eligibility uncertainty in FCFS; use policy details (e.g., stability) purely for UI messaging.
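
The FCFS side of this rule is easy to illustrate: a registration resolves immediately against remaining capacity, and `pending` never appears. A minimal sketch (the `FcfsItem` class and in-memory counter are illustrative stand-ins for item capacity checks against the database):

```ruby
# Sketch of FCFS status resolution: a registration is confirmed immediately
# while seats remain, rejected otherwise; pending is never used. The counter
# is an illustrative stand-in for persisted confirmed registrations.
class FcfsItem
  attr_reader :confirmed_count

  def initialize(capacity:)
    @capacity = capacity
    @confirmed_count = 0
  end

  # Returns the resolved status for one incoming registration.
  def register!
    if @confirmed_count < @capacity
      @confirmed_count += 1
      :confirmed
    else
      :rejected
    end
  end
end
```

Preference-based campaigns instead hold everything at `pending` until the solver runs and finalize resolves each entry.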

Close vs Finalize

  • Close registration: stops intake and edits; transitions open → closed. Used to lock the window early or when the deadline passes automatically.
  • Run allocation (preference-based only): triggers solver; transitions closed → processing. FCFS campaigns skip this step (results already determined).
  • Finalize results: before materialization, evaluates all active policies whose phase is finalization or both for each confirmed user (via a Registration::FinalizationGuard). A student_performance policy in finalization phase requires Certification=passed for all confirmed users. If any user fails a finalization-phase policy (or has missing/pending certification) the process aborts and status remains processing (or closed for FCFS) for remediation. After passing guards, materializes confirmed results and transitions to completed.
  • Planning-only campaigns: close only; do not call finalize!. Results remain in reporting tables and are not materialized. When planning_only is true, finalize!/allocate_and_finalize! are no-ops.
  • Lecture performance completeness checks:
    • Campaign save: Warns if any students lack certifications (any phase with student_performance policy)
    • Campaign open: Hard-fails if any students have missing/pending certifications (registration or both phase)
    • Campaign finalize: Hard-fails if any confirmed registrants have missing/pending certifications (finalization or both phase); auto-rejects students with failed certifications

See also: Student Performance → Certification (05-student-performance.md).
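
The finalization guard described above can be sketched independently of ActiveRecord. This is a simplified illustration (data shapes and the return convention are assumptions; the real Registration::FinalizationGuard evaluates full finalization-phase policies):

```ruby
# Simplified sketch of the finalization guard: confirmed users with missing
# or pending certifications abort the whole finalize, while users with failed
# certifications are auto-rejected. Hash/array inputs are illustrative.
class FinalizationGuard
  GuardError = Class.new(StandardError)

  # confirmed_user_ids: users the allocation confirmed
  # certifications: { user_id => :passed | :failed | :pending }
  def initialize(confirmed_user_ids, certifications)
    @confirmed_user_ids = confirmed_user_ids
    @certifications = certifications
  end

  # Returns user_ids allowed to be materialized, or raises so the campaign
  # keeps its current status for remediation.
  def check!
    blocked = @confirmed_user_ids.select { |id| [nil, :pending].include?(@certifications[id]) }
    raise GuardError, "missing/pending certifications: #{blocked.inspect}" if blocked.any?
    @confirmed_user_ids.reject { |id| @certifications[id] == :failed } # auto-reject failed
  end
end
```

Only after this guard passes does materialization run and the campaign transition to completed.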

Future Extension: Scheduled Campaign Opening

Currently, campaigns transition draft → open via manual teacher action. A future enhancement could add automatic opening via registration_start timestamp and background job. See Future Extensions - Scheduled Opening for details.

UI hooks for unassigned

After completion, the Campaign Show can surface an "Unassigned registrants" table (name, matriculation, top preferences) with actions to place users into groups via Roster Maintenance. In roster screens, add a filter "Candidates from campaign X" that lists these unassigned users for quick moves.

Campaign Lifecycle & Freezing Rules

Campaigns transition through several states to ensure data integrity and fair user experience. Certain attributes freeze at specific lifecycle points to prevent inconsistent or unfair changes.

State Definitions:

  • draft: Campaign is being configured, not visible to students
  • open: Registration window is active, students can register
  • closed: Registration window ended (automatically at deadline or manually)
  • processing: Allocation algorithm running (preference-based only)
  • completed: Results published, rosters materialized

Freezing Rules

Campaign Attributes

| Attribute | Freeze Point | Modification Rules |
|---|---|---|
| allocation_mode | After draft | Cannot change once opened. Students make decisions based on mode (early registration for FCFS vs. preference ranking). |
| registration_opens_at | After draft | Cannot change once opened. Opening time is in the past. |
| registration_deadline | Never | Can be extended anytime. Shortening is allowed but discouraged (confusing UX). |
| planning_only | Never | Can be toggled anytime. Affects internal behavior, not student-facing. |

Policies

| Action | Freeze Point | Modification Rules |
|---|---|---|
| Add/Edit/Remove | After draft | Cannot add, edit, or remove policies once opened. New policies could invalidate existing registrations (especially in FCFS where spots are already confirmed). |

Items

| Action | Freeze Point | Modification Rules |
|---|---|---|
| Add item | Never | Can always add new items. Gives students more options without invalidating existing choices. |
| Remove item | After draft | Cannot remove items with existing registrations. Students may have registered for (FCFS) or ranked (preference) that item. |

Capacity Constraints

| Mode | Freeze Point | Modification Rules |
|---|---|---|
| FCFS | Constrained | Can increase anytime. Can decrease only if new_capacity >= confirmed_count for that item. Cannot revoke confirmed spots. |
| Preference-based | After completed | Can change freely while draft, open, or closed (allocation hasn't run). Freezes once completed (results published). |

Implementation Notes

Validation Example:

validate :allocation_mode_frozen_after_open, on: :update
validate :policies_frozen_after_open, on: :update
validate :capacity_decrease_respects_confirmed, on: :update

def allocation_mode_frozen_after_open
  if allocation_mode_changed? && !draft?
    errors.add(:allocation_mode, "cannot be changed after campaign opens")
  end
end

Item Removal:

  • Check item.user_registrations.exists? before allowing deletion
  • Alternative: Soft-delete (set active: false) instead of destroying

UI Feedback:

  • Disable/gray out frozen fields in forms
  • Show tooltips explaining why changes are blocked
  • Display warning before opening campaign: "Settings will be locked after opening"

Reopening Campaigns

When reopening a completed campaign (transitioning back to open), all freezing rules still apply. The campaign returns to accepting registrations, but fundamental settings (mode, policies, items) remain locked.

Example Implementation (Phase-aware planned state)

module Registration
  class Campaign < ApplicationRecord
    belongs_to :campaignable, polymorphic: true
    has_many :registration_items,
             class_name: "Registration::Item",
             dependent: :destroy
    has_many :user_registrations,
             class_name: "Registration::UserRegistration",
             dependent: :destroy
    has_many :registration_policies,
             class_name: "Registration::Policy",
             dependent: :destroy

    enum allocation_mode: { first_come_first_served: 0, preference_based: 1 }
    enum status: { draft: 0, open: 1, closed: 2, processing: 3, completed: 4 }

    validates :title, :registration_deadline, presence: true

    def evaluate_policies_for(user, phase: :registration)
      if phase == :registration
        return Registration::PolicyEngine::Result.new(pass: false, code: :campaign_not_open) unless open?
      end
      engine = Registration::PolicyEngine.new(self)
      engine.eligible?(user, phase: phase)
    end

    def policies_satisfied?(user, phase: :registration)
      evaluate_policies_for(user, phase: phase).pass
    end

    def open_for_registrations?
      open?
    end

    def finalize!
      return false if planning_only?
      return false unless closed? || processing?
      Registration::FinalizationGuard.new(self).check!
      Registration::AllocationMaterializer.new(self).materialize!
      update!(status: :completed)
    end

    def allocate!
      return false unless preference_based? && closed?
      update!(status: :processing)
      Registration::AllocationService.new(self, strategy: :min_cost_flow).allocate!
      true
    end

    def allocate_and_finalize!
      return false if planning_only?
      return false unless allocate!
      finalize!
    end

    def close!
      update!(status: :closed) if open?
    end
  end
end

The system automatically calls close! when registration_deadline is reached via a scheduled job.
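
The deadline sweep can be sketched as a small, hedged illustration: a periodically scheduled job closes every open campaign whose deadline has passed. The job and struct below are illustrative (in Rails this would be an ActiveJob iterating over a Registration::Campaign scope such as open campaigns with `registration_deadline <= Time.current`):

```ruby
# Hedged sketch of the deadline sweep. The Campaign struct is an illustrative
# stand-in; only the close-if-expired logic matters here.
Campaign = Struct.new(:status, :registration_deadline) do
  def close!
    self.status = :closed if status == :open
  end
end

class CloseExpiredCampaignsJob
  # Intended to run on a schedule (e.g. every few minutes).
  def self.perform(campaigns, now: Time.now)
    campaigns.each do |c|
      c.close! if c.status == :open && c.registration_deadline <= now
    end
  end
end
```

Campaigns that are already closed, processing, or completed are untouched, so the sweep is safe to re-run.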

Usage Scenarios

Info

Entry points: Teacher/Editor starts at Campaigns index; Student starts at Student Registration index.

  • A "Tutorial Registration" campaign is created for a Lecture. It's preference_based and allows students to rank their preferred tutorial slots. Items point to Tutorial. (Admin UI: Tutorial Show (open); Student UI: Show – preference-based, Confirmation)
  • A "Talk Assignment" campaign is created for a Lecture (often a seminar). It's preference_based or first_come_first_served and assigns talk slots. Items point to Talk.
  • A "Lecture Registration" campaign is created for a Lecture (commonly seminars). It's typically first_come_first_served and enrolls students directly. The single item points to the Lecture. (Student UI: Show – FCFS)
  • A "Seminar Enrollment" campaign is created for a Lecture (acting as a seminar). It's first_come_first_served to quickly fill the limited seminar seats. (Student UI: Show – FCFS)
  • An "Interest Registration" campaign is created for a Lecture before the term to gauge demand (planning-only). It's first_come_first_served with a very high capacity; when it ends, you do not call finalize!. Results are used for hiring/planning and are not materialized to rosters. (Admin UI: Interest Show (draft))
  • An "Exam Registration" campaign is created for an Exam. It is first_come_first_served and may include a student_performance policy (phase: registration or both) for advisory eligibility messaging; finalization enforces Certification=passed only if a finalization-phase student_performance policy exists. Items point to Exam. (Admin UI: Exam Show; Student UI: Show – exam (FCFS); see also action required: institutional email)

Planning-only campaigns (Interest Registration)

Planning-only Interest Registration

Goal: Measure demand before a lecture starts to plan staffing (e.g., hire tutors) without changing any rosters.

  • Host: Lecture (campaignable).
  • Items: Single item pointing to the Lecture (registerable).
  • Mode: first_come_first_served.
  • Capacity: Very high (effectively unlimited) to capture demand signal.
  • Timing: Open well before the term; close before main registrations.
  • Finalization: Do not invoke finalize!. No domain materialization occurs.
  • Reporting: Use counts from Registration::UserRegistration (e.g., confirmed) for planning and exports.

See also the Campaigns index mockups where the planning-only row appears as "Interest Registration" with a note like "Planning only; not materialized".

Registration::Campaignable (Concern)

The Campaign Host

What it represents

A role for domain models (like Lecture) that allows them to 'host' or own registration campaigns.

Think of it as

The 'container' for a set of related registration campaigns. A lecture 'contains' the campaign for its tutorials.

Responsibilities

  • Provides a central point for grouping related campaigns.
  • Simplifies finding campaigns related to a specific object (e.g., all registrations for a given lecture).

Example Implementation

# app/models/concerns/registration/campaignable.rb
module Registration
  module Campaignable
    extend ActiveSupport::Concern

    included do
      has_many :registration_campaigns,
               as: :campaignable,
               class_name: "Registration::Campaign",
               dependent: :destroy
    end
  end
end

Implementations Here

  • Lecture: Hosts campaigns for its tutorials or talks.
  • Exam: Hosts a campaign for exam seat registration.

Registration::Item (ActiveRecord Model)

The Selectable Catalog Entry

What it represents

A selectable entry in a Registration::Campaign's "catalog". Each entry points to a real-world Registerable object (like a Tutorial or Talk).

Think of it as

  • Restaurant Analogy: An item on a restaurant menu. The Registerable is the actual dish prepared in the kitchen. The RegistrationItem is the line on the menu for a specific day (the campaign). You order from the menu, not by pointing at the dish in the kitchen.

  • Teaching Analogy: A slot in the registration system. The Registerable is the actual tutorial group that meets every Monday at 10am. The RegistrationItem is the entry for that tutorial in this semester's "Linear Algebra" registration (the campaign). Students sign up for the slot in the system, not by walking into the classroom.

The main fields and methods of Registration::Item are:

| Name/Field | Type/Kind | Description |
|---|---|---|
| registration_campaign_id | DB column | Foreign key for the parent campaign. |
| registerable_type | DB column | Polymorphic type for the registerable object (e.g., Tutorial). |
| registerable_id | DB column | Polymorphic ID for the registerable object. |
| registration_campaign | Association | The parent Registration::Campaign. |
| registerable | Association | The underlying domain object (e.g., a Tutorial instance). |
| user_registrations | Association | All user registrations (registration requests) for this item. |
| assigned_users | Method | Returns users with confirmed registration (registration system data). |
| capacity | Method | The maximum number of users, delegated from the registerable. |

Example Implementation
module Registration
  class Item < ApplicationRecord
    belongs_to :registration_campaign,
               class_name: "Registration::Campaign"
    belongs_to :registerable, polymorphic: true
    has_many :user_registrations,
             class_name: "Registration::UserRegistration",
             dependent: :destroy

    def assigned_users
      user_registrations.confirmed.includes(:user).map(&:user)
    end
  end
end

Usage Scenarios

Each scenario below is the item-side view of the campaign types listed earlier. The Registration::Item belongs to the associated campaign and wraps the concrete registerable record that users ultimately get assigned to.

  • For a "Tutorial Registration" campaign: A RegistrationItem is created for each Tutorial (e.g., "Tutorial A (Mon 10:00)"). The registerable association points to the Tutorial record.
  • For a "Talk Assignment" campaign: A RegistrationItem is created for each Talk (e.g., "Talk: Machine Learning Advances"). The registerable association points to the Talk record.
  • For a "Lecture Registration" campaign: A RegistrationItem is created for the lecture itself. The registerable association points to the Lecture record. This will be useful mostly when the lecture is a seminar. Lecture then has a dual role: as campaignable and as registerable.
  • For an "Exam Registration" campaign: A RegistrationItem is created for the exam itself. The registerable association points to the Exam record. The campaign's campaignable is the parent Lecture. Each exam (Hauptklausur, Nachklausur, Wiederholungsklausur) gets its own campaign hosted by the lecture, with that exam as the sole registerable item.

Registration::Item vs. Registration::Registerable

It's crucial to understand the difference between these two concepts:

  • Registration::Registerable is the actual domain object that a user is ultimately assigned to. Think of it as the real-world entity, like a Tutorial or a Talk. It's a role provided by a concern.

  • Registration::Item is a proxy or wrapper that makes a registerable object available within a specific campaign. Think of it as a "listing in a catalog." If you have a "Tutorial Registration" campaign, you create one Registration::Item for each Tutorial that students can sign up for in that campaign.

Users register for a Registration::Item, not directly for a Registerable. This separation allows the same Tutorial to potentially be part of different campaigns over time without conflict.


Registration::Registerable (Concern)

The Registration Target

What it represents

A role for domain models (like Tutorial or Talk) that allows them to be the ultimate target of a registration.

Think of it as

The actual group or event a user is enrolled in, such as a specific tutorial group or being assigned as the speaker for a talk.

Responsibilities

  • Provide a capacity (fixed column or computed).
  • Implement materialize_allocation!(user_ids:, campaign:) to apply confirmed results idempotently.
  • Remain agnostic of solver or eligibility logic.

Not Responsibilities

  • Eligibility checks (policies handle that).
  • Storing pending registrations (that’s UserRegistration).
  • Orchestrating allocation (that's the Registration::Campaign).

Public Interface

| Method | Purpose | Required |
|---|---|---|
| capacity | Integer seat count. | Yes |
| materialize_allocation!(user_ids:, campaign:) | Persists the authoritative roster for this campaign. | Yes |
| allocated_user_ids | Current materialized users from domain roster (delegates to roster system). | Yes |
| remaining_capacity, full? | Convenience derived helpers. | Optional |

Example Implementation

# app/models/concerns/registration/registerable.rb
module Registration
  module Registerable
    extend ActiveSupport::Concern

    def capacity
      self[:capacity] || raise(NotImplementedError, "#{self.class} must define #capacity")
    end

    def allocated_user_ids
      raise NotImplementedError, "#{self.class} must implement #allocated_user_ids to delegate to roster"
    end

    def remaining_capacity
      [capacity - allocated_user_ids.size, 0].max
    end

    def full?
      remaining_capacity.zero?
    end

    def materialize_allocation!(user_ids:, campaign:)
      raise NotImplementedError, "#{self.class} must implement #materialize_allocation!"
    end
  end
end

Implementation Details

The Registration::Item model uses belongs_to :registerable, polymorphic: true. Any model that includes the Registration::Registerable concern (e.g., Tutorial, Talk) becomes a valid target for this association.

The materialize_allocation! method is the most critical part of the interface. It is responsible for taking the final list of user_ids from the allocation process and persisting them into the domain model's own roster.

This method must be idempotent, meaning running it multiple times with the same user_ids and campaign produces the same result. A common pattern is to first remove all roster entries associated with the given campaign and then add the new ones, all within a single database transaction. Concrete examples are shown in the Tutorial and Talk sections later in this document.

The allocated_user_ids method must be implemented by each registerable model to delegate to its roster system. This returns the current materialized roster (domain data), as opposed to Registration::Item#assigned_users which returns users with confirmed registrations (registration system data). After finalization, these should match.
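The delete-then-insert idempotency pattern can be sketched in plain Ruby (an in-memory stand-in; the real implementation replaces roster rows inside a single database transaction, as described above):

```ruby
# Roster entries are keyed by the campaign that produced them, so re-running
# materialization for a campaign replaces exactly that campaign's entries.
class TutorialRoster
  def initialize
    @entries = Hash.new { |h, k| h[k] = [] }  # campaign_id => user_ids
  end

  def materialize_allocation!(user_ids:, campaign_id:)
    # "Delete" all entries sourced from this campaign, then insert the new set.
    @entries[campaign_id] = user_ids.uniq
  end

  def allocated_user_ids
    @entries.values.flatten.uniq
  end
end

roster = TutorialRoster.new
roster.materialize_allocation!(user_ids: [1, 2, 3], campaign_id: 7)
roster.materialize_allocation!(user_ids: [1, 2, 3], campaign_id: 7)  # re-run: same result
puts roster.allocated_user_ids.inspect  # => [1, 2, 3]
```

Scoping entries by campaign is what makes the operation safe to repeat and keeps rosters from different campaigns from clobbering each other.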

Usage Scenarios

  • A Tutorial includes Registerable to manage its student roster.
  • A Talk includes Registerable to designate students as its speakers.
  • A Lecture (acting as a seminar) includes Registerable to manage direct enrollment.
  • A future Exam model would include Registerable to manage allocation for an exam.

Registration::UserRegistration (ActiveRecord Model)

A User's Application for an Item

What it represents

A record of a single user's application for a specific item within a campaign.

Think of it as

A user's 'ballot' or 'application form' for one specific choice. In preference-based mode, it's one ranked choice on their list.

The main fields and methods of Registration::UserRegistration are:

| Name/Field | Type/Kind | Description |
|---|---|---|
| user_id | DB column | Foreign key for the user submitting. |
| registration_campaign_id | DB column | Foreign key for the parent campaign. |
| registration_item_id | DB column | Foreign key for the selected item. |
| status | DB column (Enum) | pending, confirmed, rejected. |
| preference_rank | DB column | Nullable integer for preference-based mode. |
| user | Association | The user who submitted. |
| registration_campaign | Association | The parent campaign. |
| registration_item | Association | The selected item. |

Behavior Highlights

  • The status tracks the lifecycle: pending (awaiting allocation), confirmed (successful), or rejected (unsuccessful).
  • The preference_rank is only used in preference_based campaigns and must be unique per user within a campaign.
  • In first_come_first_served mode, a registration is typically created directly with confirmed status if capacity allows.
  • Business logic should enforce that a user can only have one confirmed registration per campaign.

Example Implementation

module Registration
  class UserRegistration < ApplicationRecord
    belongs_to :user
    belongs_to :registration_campaign,
               class_name: "Registration::Campaign"
    belongs_to :registration_item,
               class_name: "Registration::Item"

    enum status: { pending: 0, confirmed: 1, rejected: 2 }

    validates :preference_rank,
              presence: true,
              if: -> { registration_campaign.preference_based? }
    validates :preference_rank,
              uniqueness: { scope: [:user_id, :registration_campaign_id] },
              allow_nil: true

    # Enforces the rule from the behavior highlights: at most one
    # confirmed registration per user within a campaign.
    validates :user_id,
              uniqueness: { scope: :registration_campaign_id,
                            conditions: -> { confirmed } },
              if: :confirmed?
  end
end

Usage Scenarios

  • Preference-based: Alice submits two Registration::UserRegistration records for a campaign: one for "Tutorial A" with preference_rank: 1, and one for "Tutorial B" with preference_rank: 2. Both have status: :pending.
  • First-Come-First-Served: Bob registers for the "Seminar Algebraic Geometry". A single Registration::UserRegistration record is created with status: :confirmed immediately, as long as there is capacity.

First-Come-First-Served Workflow

In FCFS mode, registration status is determined immediately upon submission:

Controller Logic (recommended):

# app/controllers/registration/user_registrations_controller.rb
def create
  campaign = Registration::Campaign.find(params[:campaign_id])
  item = campaign.registration_items.find(params[:item_id])

  # Use the campaign's documented policy entry point and respond with an
  # explicit error instead of silently returning without rendering.
  result = campaign.evaluate_policies_for(current_user, phase: :registration)
  return head(:forbidden) unless result.pass

  status = item.remaining_capacity > 0 ? :confirmed : :rejected

  Registration::UserRegistration.create!(
    user: current_user,
    registration_campaign: campaign,
    registration_item: item,
    status: status,
    preference_rank: nil # not used in FCFS
  )
end

Key Differences from Preference-Based:

| Aspect | FCFS | Preference-Based |
|---|---|---|
| Initial status | :confirmed or :rejected | Always :pending |
| When decided | Immediately on create | After allocation runs |
| Multiple items | User registers for ONE item | User ranks MULTIPLE items |
| Solver needed | No | Yes |
| Finalization | Optional (roster may already be live) | Required |

Capacity Enforcement:

  • Check item.remaining_capacity before creating the registration
  • If capacity exhausted, create with status: :rejected (no waitlist)
  • Alternatively, return error and don't create record at all
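The capacity check and the create must happen atomically, or two concurrent requests can both pass the check and oversubscribe the item. The sketch below uses a Mutex as a stand-in for a database row lock (plain Ruby, illustrative only; in Rails this would typically be a row-level lock around the check-and-create):

```ruby
# FCFS item where check-and-reserve is a single atomic step.
class FcfsItem
  def initialize(capacity)
    @capacity  = capacity
    @confirmed = []
    @lock      = Mutex.new
  end

  def register(user_id)
    @lock.synchronize do            # stands in for a DB row lock
      if @confirmed.size < @capacity
        @confirmed << user_id
        :confirmed
      else
        :rejected                   # no waitlist
      end
    end
  end
end

item = FcfsItem.new(2)
results = [101, 102, 103].map { |uid| item.register(uid) }
puts results.inspect  # => [:confirmed, :confirmed, :rejected]
```

Without the lock, the "remaining capacity" read and the subsequent create form a classic check-then-act race.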

Registration::Policy (ActiveRecord Model)

A Composable Eligibility Rule

What it represents

A single, configurable eligibility condition attached to a campaign.

Think of it as

“One rule card” (student performance gate, email domain restriction, prerequisite confirmation).

The main fields and methods of Registration::Policy are:

| Name/Field | Type/Kind | Description |
|---|---|---|
| registration_campaign_id | DB column | Foreign key for the parent campaign. |
| kind | DB column (Enum) | The type of rule to apply (e.g., student_performance). |
| phase | DB column (Enum) | registration, finalization, or both. |
| config | DB column (JSONB) | Parameters for the rule (e.g., { "allowed_domains": ["uni-heidelberg.de"] }). |
| position | DB column | The evaluation order for policies within a campaign. |
| active | DB column | A boolean to enable or disable the policy. |
| registration_campaign | Association | The parent Registration::Campaign. |
| evaluate(user) | Method | Evaluates the policy for a given user and returns a result hash. |

Behavior Highlights

  • Policies are evaluated in ascending position order.
  • The PolicyEngine short-circuits on the first policy that fails.
  • Returns a structured outcome ({ pass: true/false, ... }) for clear feedback.
  • Adding a new rule type involves adding to the kind enum and implementing its logic in evaluate, with no schema changes required.

The evaluate method of a policy returns a hash. While the top-level structure is consistent (containing a boolean pass key), individual policies can enrich the result with a details hash, providing context-specific information. This is particularly useful for complex rules like student performance eligibility.

Lecture performance advisory payload

For early exam registration messaging, the student_performance policy attaches a concise details hash (points, required_points, stability). Rich progress and "may still become eligible" guidance lives in the Student Performance views, not here.

Example Implementation

module Registration
  class Policy < ApplicationRecord
    belongs_to :registration_campaign,
               class_name: "Registration::Campaign"
    acts_as_list scope: :registration_campaign

    enum kind: {
      student_performance: "student_performance",
      institutional_email: "institutional_email",
      prerequisite_campaign: "prerequisite_campaign",
      custom_script: "custom_script"
    }

    enum phase: {
      registration: "registration",
      finalization: "finalization",
      both: "both"
    }

    scope :active, -> { where(active: true) }
    scope :for_phase, ->(p) { where(phase: ["both", p.to_s]) }

    def evaluate(user)
      case kind.to_sym
      when :student_performance then eval_student_performance(user)
      when :institutional_email then eval_email(user)
      when :prerequisite_campaign then eval_prereq(user)
      when :custom_script then eval_custom(user)
      else fail_result(:unknown_kind, "Unknown policy kind")
      end
    end

    private

    def pass_result(code = :ok, details = {})
      { pass: true, code: code, details: details }
    end

    def fail_result(code, message, details = {})
      { pass: false, code: code, message: message, details: details }
    end

    def eval_student_performance(user)
      lecture = Lecture.find(config["lecture_id"])

      cert = StudentPerformance::Certification.find_by(lecture: lecture, user: user)

      if cert&.passed?
        pass_result(:certification_passed)
      else
        fail_result(
          :certification_not_passed,
          "Lecture performance certification required",
          certification_status: cert&.status || :missing
        )
      end
    end

    def eval_email(user)
      allowed = Array(config["allowed_domains"])
      return pass_result(:no_constraint) if allowed.empty?
      domain = user.email.to_s.split("@").last
      if allowed.include?(domain)
        pass_result(:domain_ok)
      else
        fail_result(:domain_blocked, "Email domain not allowed",
                    domain: domain, allowed: allowed)
      end
    end

    def eval_prereq(user)
      prereq_id = config["prerequisite_campaign_id"]
      return fail_result(:missing_prerequisite_id, "No prerequisite specified") unless prereq_id

      prereq_campaign = Registration::Campaign.find_by(id: prereq_id)
      return fail_result(:prerequisite_not_found, "Prerequisite campaign not found") unless prereq_campaign

      lecture = prereq_campaign.campaignable
      ok = lecture.respond_to?(:roster) && lecture.roster.include?(user)

      ok ? pass_result(:prerequisite_ok) : fail_result(:prerequisite_missing, "Not on prerequisite roster")
    end

    def eval_custom(_user)
      pass_result(:custom_not_implemented)
    end
  end
end

Policy config: typed UI, JSONB storage

config is stored as JSONB for flexibility, but the UI must present typed fields per policy kind. Do not expose raw JSON to end users. Normalize inputs and validate per kind in the model. Controllers whitelist per-kind config keys.

Policy Result Reference

Each policy kind returns a standardized result hash with optional details. The following table documents the expected details keys for each policy kind:

| Policy Kind | Success details Keys | Failure details Keys | Example |
|---|---|---|---|
| student_performance | None | certification_status (:missing, :pending, :failed) | { certification_status: :pending } |
| institutional_email | None | domain (string), allowed (array of strings) | { domain: "gmail.com", allowed: ["uni.edu"] } |
| prerequisite_campaign | None | prerequisite_campaign_id (integer) | { prerequisite_campaign_id: 42 } |
| custom_script | Defined by script | Defined by script | N/A (implementation-specific) |

All results include:

  • pass (boolean): Whether the policy passed
  • code (symbol): Machine-readable result code (e.g., :certification_passed, :domain_blocked)
  • message (string, optional): Human-readable message (only on failure)
  • details (hash, optional): Additional context as documented above
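A consumer of this contract only needs the keys above. The helper below is a hypothetical sketch (feedback_for is not part of the proposed API) of turning a result hash into user-facing feedback:

```ruby
# Convert a policy result hash into a display string.
def feedback_for(result)
  return "Eligible" if result[:pass]

  details = (result[:details] || {}).map { |k, v| "#{k}=#{v}" }.join(", ")
  "#{result[:message]} (#{details})"
end

puts feedback_for(pass: true, code: :ok)  # => Eligible
puts feedback_for(pass: false, code: :domain_blocked,
                  message: "Email domain not allowed",
                  details: { domain: "gmail.com", allowed: ["uni.edu"] })
```

Because every policy kind returns the same top-level shape, the UI layer never needs per-kind branching to render failures.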

Why JSONB for Policy.config?

Policies are composable and heterogeneous. Each kind needs different parameters (domains list, lecture reference, prerequisite campaign id, future custom scripts). Using JSONB for config avoids schema churn and lets us:

  • Add new policy kinds without migrations.
  • Evolve per-kind parameters independently.
  • Keep the public API stable (kind, config), while the typed UI and per-kind validations enforce structure.

Constraints and guardrails:

  • The UI is typed per kind; users never edit raw JSON.
  • Models validate allowed keys and shapes per kind.
  • Index JSONB keys if needed for queries (e.g., config ->> 'lecture_id').
  • Only minimal data belongs here. For exam eligibility, thresholds and criteria live in StudentPerformance::Rule; the policy stores only { "lecture_id": <id> }.
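The "validate allowed keys per kind" guardrail can be sketched as a simple whitelist lookup (kinds and keys mirror the examples in this chapter; the concrete validation hook in the model is left open):

```ruby
# Per-kind whitelist of config keys; anything else is flagged as unknown.
ALLOWED_CONFIG_KEYS = {
  "institutional_email"   => %w[allowed_domains],
  "student_performance"   => %w[lecture_id],
  "prerequisite_campaign" => %w[prerequisite_campaign_id]
}.freeze

def unknown_config_keys(kind, config)
  config.keys - ALLOWED_CONFIG_KEYS.fetch(kind, [])
end

puts unknown_config_keys("student_performance", { "lecture_id" => 42 }).inspect   # => []
puts unknown_config_keys("institutional_email", { "domains" => [] }).inspect      # => ["domains"]
```

A model validation would reject the record whenever unknown_config_keys is non-empty, keeping JSONB flexible for the schema while staying strict for users.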

See UI: Policies tab in Exam Show.

Usage Scenarios

  • Email constraint: kind: :institutional_email, phase: :registration, config: { "allowed_domains": ["uni.edu"] }
  • Lecture performance gate (advisory + enforcement): kind: :student_performance, phase: :both, config: { "lecture_id": 42 }
  • Prerequisite: kind: :prerequisite_campaign, phase: :registration, config: { "prerequisite_campaign_id": 55 }

Registration::PolicyEngine (Service Object)

The Eligibility Pipeline

What it represents

A service that evaluates a user's eligibility by processing all of a campaign's active policies in order.

Think of it as

An 'eligibility checklist' processor that stops at the first failed check and provides a trace.

Public Interface

| Method | Purpose |
|---|---|
| initialize(campaign) | Sets up the engine with the campaign whose policies will be used. |
| eligible?(user, phase:) | Evaluates the phase's policies for the user and returns a structured Result. |

Behavior Highlights

  • Iterates policies in position order.
  • Stops at the first failure (fast fail).
  • Returns a structured Result object containing the pass/fail status, the policy that failed (if any), and a full trace of all evaluations.
  • This Result object is used by Registration::Campaign#evaluate_policies_for to provide clear feedback to the UI.

Lecture performance: data completeness requirement

Unlike other policies, student_performance requires data preparation before the phase starts. Campaign save/open/finalize will validate that all required certifications exist and are non-pending. See Student Performance chapter (05-student-performance.md) for pre-flight validation details.

Freshness vs certification

The student_performance policy checks the Certification table at runtime (no JIT recomputation during registration). Facts (Record) are updated by background jobs or teacher-triggered recomputation. This keeps registration fast and deterministic.

Example Implementation

module Registration
  class PolicyEngine
    Result = Struct.new(:pass, :failed_policy, :trace, keyword_init: true)

    def initialize(campaign)
      @campaign = campaign
    end

    def eligible?(user, phase: :registration)
      trace = []
      applicable = @campaign.registration_policies.active.for_phase(phase).order(:position)
      applicable.each do |policy|
        outcome = policy.evaluate(user)
        trace << { policy_id: policy.id, kind: policy.kind, phase: policy.phase, outcome: outcome }
        return Result.new(pass: false, failed_policy: policy, trace: trace) unless outcome[:pass]
      end
      Result.new(pass: true, failed_policy: nil, trace: trace)
    end
  end
end

Usage Scenarios

  • A trace showing two passed registration-phase policies and one failed policy produces a clear message to the user.
  • A finalization guard iterates confirmed users with phase: :finalization; any failure aborts materialization.
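The fast-fail loop can be exercised with duck-typed stub policies (plain Ruby; StubPolicy and the free-standing eligible? are illustrative stand-ins for the real engine):

```ruby
# A stub policy only needs to respond to #evaluate with the result-hash contract.
StubPolicy = Struct.new(:id, :kind, :passes) do
  def evaluate(_user)
    { pass: passes }
  end
end

def eligible?(policies, user)
  trace = []
  policies.each do |policy|
    outcome = policy.evaluate(user)
    trace << { policy_id: policy.id, kind: policy.kind, outcome: outcome }
    return { pass: false, failed_policy: policy, trace: trace } unless outcome[:pass]
  end
  { pass: true, failed_policy: nil, trace: trace }
end

policies = [
  StubPolicy.new(1, :institutional_email, true),
  StubPolicy.new(2, :student_performance, false),
  StubPolicy.new(3, :prerequisite_campaign, true)
]

result = eligible?(policies, :alice)
puts result[:failed_policy].id  # => 2
puts result[:trace].size        # => 2 (policy 3 was never evaluated)
```

The trace records exactly the policies that ran, which is what lets the UI explain both the failure and everything that passed before it.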

Registration::AllocationService (Service Object)

The Allocation Solver

What it represents

A service object that encapsulates the complex logic of assigning users to items based on their preferences and a chosen strategy.

Think of it as

The 'brain' that solves the puzzle of who gets what in a preference-based campaign.

Public Interface

| Method | Purpose |
|---|---|
| initialize(campaign, strategy:) | Sets up the service with a campaign and a specific allocation strategy. |
| allocate! | Executes the allocation logic based on the chosen strategy. |

Responsibilities

  • Takes a Registration::Campaign as input.
  • Gathers all pending Registration::UserRegistration records with their preference ranks.
  • Gathers all Registration::Item records with their capacities.
  • Executes a specific allocation strategy (e.g., Min-Cost Flow) to find an optimal assignment.
  • Updates the status of each Registration::UserRegistration to either :confirmed or :rejected based on the solver's output.

Not Responsibilities

  • It does not materialize the results into the final domain models (e.g., Tutorial rosters). That is handled by the AllocationMaterializer called within finalize!. This keeps the concerns of "solving the assignment" and "persisting the results" separate.

Implementation Details

The service uses a Strategy Pattern to delegate the actual solving to a dedicated class based on the chosen strategy. This allows for different solver implementations (e.g., Min-Cost Flow, CP-SAT) to be used interchangeably.

For a detailed breakdown of the graph modeling and solver implementation, see the Allocation Algorithm Details chapter.

Example Implementation

# This service acts as a dispatcher for different solver strategies.
module Registration
  class AllocationService
    def initialize(campaign, strategy: :min_cost_flow, **opts)
      @campaign = campaign
      @strategy = strategy
      @opts = opts
    end

    def allocate!
      solver =
        case @strategy
        when :min_cost_flow then Registration::Solvers::MinCostFlow.new(@campaign, **@opts)
        # when :cp_sat then Registration::Solvers::CpSat.new(@campaign, **@opts) # Future
        else
          raise ArgumentError, "Unknown strategy: #{@strategy}"
        end
      solver.run
    end
  end
end

# Example of a concrete solver strategy class.
# See 07-algorithm-details.md for the full implementation.
module Registration
  module Solvers
    class MinCostFlow
      def initialize(campaign, **opts)
        @campaign = campaign
        # ... gather users, items, preferences ...
      end

      def run
        # 1. Build the graph model for the solver
        # 2. Solve the model
        # 3. Persist the results back to Registration::UserRegistration statuses
      end
    end
  end
end

Usage Scenarios

  • After the deadline for a preference_based tutorial registration campaign, a background job calls Registration::AllocationService.new(campaign).allocate!. The service runs the solver and updates thousands of Registration::UserRegistration records to either :confirmed or :rejected.
  • An administrator manually triggers the assignment for a seminar's talk selection via a button in the UI, which in turn calls this service.

Registration::AllocationMaterializer (Service Object)

The Roster Populator

What it represents

A service that translates the final allocation results (Registration::UserRegistration statuses) into concrete domain rosters.

Think of it as

The "secretary" that takes the list of confirmed attendees from the registration system and updates the official class lists.

Public Interface

| Method | Purpose |
|---|---|
| initialize(campaign) | Sets up the materializer with the campaign to be finalized. |
| materialize! | Executes the materialization process. |

Responsibilities

  • Gathers all confirmed Registration::UserRegistration records for the campaign.
  • Groups them by their Registration::Item.
  • For each Registration::Item, it calls materialize_allocation! on the underlying registerable object, passing the final list of user IDs.
  • This process is the crucial hand-off from the temporary registration system to the permanent domain models.

Example Implementation

module Registration
  class AllocationMaterializer
    # Translates a finalized campaign's confirmed registrations into domain
    # rosters: groups confirmed UserRegistrations by item and hands each
    # item's user ids to its registerable via materialize_allocation!.
    def initialize(campaign)
      @campaign = campaign
    end

    def materialize!
      registrations_by_item = @campaign.user_registrations
                                       .confirmed
                                       .includes(:registration_item)
                                       .group_by(&:registration_item)

      ActiveRecord::Base.transaction do
        registrations_by_item.each do |item, registrations|
          user_ids = registrations.map(&:user_id)
          item.registerable.materialize_allocation!(user_ids: user_ids, campaign: @campaign)
        end
      end
    end
  end
end

Registration::FinalizationGuard (Service Object)

The Finalization Gatekeeper

What it represents

A guard service that ensures every confirmed user passes all finalization-phase policies before roster materialization. For student_performance policies, it enforces certification completeness and auto-rejects users with failed certifications.

Public Interface

| Method | Purpose |
|---|---|
| initialize(campaign) | Prepare guard for a campaign. |
| check! | Raises on first violation; returns true when all confirmed users pass. Auto-rejects students with failed student performance certifications. |

Example Implementation

module Registration
  class FinalizationGuard
    def initialize(campaign)
      @campaign = campaign
    end

    def check!
      policies = @campaign.registration_policies.active.for_phase(:finalization).order(:position)
      return true if policies.empty?

      confirmed = @campaign.user_registrations.confirmed.includes(:user)

      confirmed.each do |ur|
        user = ur.user
        policies.each do |policy|
          if policy.kind == "student_performance"
            lecture = Lecture.find(policy.config["lecture_id"])
            cert = StudentPerformance::Certification.find_by(lecture: lecture, user: user)

            if cert.nil? || cert.pending?
              raise StandardError, "Finalization blocked: certification missing or pending for user #{user.id}"
            elsif cert.failed?
              ur.update!(status: :rejected)
              break # user is now rejected; skip their remaining policies
            end
          else
            outcome = policy.evaluate(user)
            unless outcome[:pass]
              raise StandardError, "Finalization blocked by policy #{policy.id} (#{policy.kind})"
            end
          end
        end
      end
      true
    end
  end
end

Behavior Highlights

  • Auto-reject failed certifications: Students with StudentPerformance::Certification.status == :failed are automatically moved to rejected status
  • Hard-fail on missing/pending: If any confirmed student has no certification or status: :pending, raise error and block finalization
  • Remediation UI trigger: The error message should trigger UI showing which students need certification resolution
  • Other policies: Evaluated normally; any failure blocks finalization

See also: Student Performance → Certification and Pre-flight Validation (05-student-performance.md).


Enhanced Domain Models

The following sections describe how existing MaMpf models will be enhanced to integrate with the registration system.

User (Enhanced)

The Registrant

What it represents

Existing MaMpf user; no schema changes required.

Example Implementation

class User < ApplicationRecord
  has_many :user_registrations,
           class_name: "Registration::UserRegistration",
           dependent: :destroy
  has_many :registration_campaigns,
           through: :user_registrations
  has_many :registration_items,
           through: :user_registrations
end

Lecture (Enhanced)

The Primary Host and Seminar Target

What it represents

  • Existing MaMpf lecture model that can both host campaigns and be registered for.

Dual Role

  • As Registration::Campaignable: Can organize tutorial registration or talk selection campaigns.
  • As Registration::Registerable: Students can register for the lecture itself (common for seminars).

Example Implementation

class Lecture < ApplicationRecord
  include Registration::Campaignable  # Can host campaigns for tutorials/talks
  include Registration::Registerable  # Can be registered for (seminar enrollment)

  # ... existing code ...

  # Implements the contract from the Registerable concern
  def materialize_allocation!(user_ids:, campaign:)
    # This method is the hand-off point to the roster management system.
    # Its responsibility is to take the final list of user IDs and
    # persist them as the official roster for this lecture (seminar),
    # sourced from this specific campaign.
    #
    # The concrete implementation using the Roster::Rosterable concern is
    # detailed in the "Allocation & Rosters" chapter.
  end
end

Tutorial (Enhanced)

A Common Registration Target

What it represents

Existing MaMpf tutorial model that students can register for.

Example Implementation

class Tutorial < ApplicationRecord
  include Registration::Registerable

  # ... existing code ...

  # Implements the contract from the Registerable concern
  def materialize_allocation!(user_ids:, campaign:)
    # This method is the hand-off point to the roster management system.
    # Its responsibility is to take the final list of user IDs and
    # persist them as the official roster for this tutorial, sourced
    # from this specific campaign.
    #
    # The concrete implementation using the Roster::Rosterable concern is
    # detailed in the "Allocation & Rosters" chapter.
  end
end

Talk (Enhanced)

A Target for Speaker Allocation

What it represents

Existing MaMpf talk model that can be assigned to students.

Example Implementation

class Talk < ApplicationRecord
  include Registration::Registerable

  # ... existing code ...

  # Implements the contract from the Registerable concern
  def materialize_allocation!(user_ids:, campaign:)
    # Similar to the Tutorial, this method hands off the final list
    # of speakers to the roster management system.
    #
    # The concrete implementation using the Roster::Rosterable concern is
    # detailed in the "Allocation & Rosters" chapter.
  end
end

Campaign Lifecycle (State Diagram)

stateDiagram-v2
    [*] --> draft
    draft --> open : open
    open --> closed : close (manual or at deadline)
    closed --> completed : finalize! (optional)

    note right of closed
        Regular FCFS campaigns: finalize to materialize rosters.
        Planning-only: stay in closed, skip finalize.
    end note

ERD

erDiagram
  "Registration::Campaign" ||--o{ "Registration::Item" : has
  "Registration::Campaign" ||--o{ "Registration::Policy" : has
  "Registration::Campaign" ||--o{ "Registration::UserRegistration" : has
  "Registration::Item" ||--o{ "Registration::UserRegistration" : has
  USER ||--o{ "Registration::UserRegistration" : submits
  "Registration::Item" }o--|| REGISTERABLE : polymorphic
  "Registration::Campaign" }o--|| CAMPAIGNABLE : polymorphic

Preference-Based State Diagram

stateDiagram-v2
    [*] --> draft: Campaign created
    draft --> open: Admin opens campaign
    open --> closed: Admin closes OR deadline reached
    closed --> processing: Allocation runs
    processing --> completed: Admin finalizes
    
    note right of draft
        Admin configures items,
        policies, deadline
    end note
    
    note right of open
        Users submit preferences;
        all status: pending
    end note
    
    note right of processing
        Registration closed;
        run allocation solver
    end note
    
    note right of completed
        Allocation finalized;
        rosters materialized
    end note

Sequence Diagram (Preference-Based Flow)

This diagram shows the typical lifecycle for a preference-based campaign.

sequenceDiagram
    actor User
    participant Controller
    participant Campaign as Registration::Campaign
    participant UserReg as Registration::UserRegistration
    actor Job as Background Job
    participant AllocationSvc as Registration::AllocationService
    participant Solver as Registration::Solvers::MinCostFlow
    participant Materializer as Registration::AllocationMaterializer
    participant RegTarget as Registerable (e.g., Tutorial)

    rect rgb(235, 245, 255)
    note over User,Controller: Registration phase (campaign is open)
    User->>Controller: Visit campaign page
    Controller->>Campaign: evaluate_policies_for(user, phase: :registration)
    alt eligible
        Controller-->>User: Show preference ranking form
        User->>Controller: Submit preferences
        loop for each preference
            Controller->>UserReg: create(user_id, item_id, rank)
        end
        Controller-->>User: Preferences saved
    else not eligible
        Controller-->>User: Show reason from PolicyEngine
    end
    end

    note over User,Job: Deadline passes

    rect rgb(255, 245, 235)
    note over Job,RegTarget: Allocation & finalization
    Job->>Campaign: allocate_and_finalize!
    Campaign->>Campaign: update!(status: :closed)
    Campaign->>AllocationSvc: new(campaign).allocate!
    AllocationSvc->>Solver: new(campaign).run()
    note right of Solver: Build graph, solve, persist statuses
    Solver->>UserReg: update_all(status: confirmed/rejected)
    Campaign->>Campaign: update!(status: :processing)
    Campaign->>Campaign: finalize!
    Campaign->>Materializer: new(campaign).materialize!
    Materializer->>RegTarget: materialize_allocation!(user_ids, campaign)
    note right of RegTarget: Update roster (idempotent)
    Campaign->>Campaign: update!(status: :completed)
    end

FCFS State Diagram

stateDiagram-v2
    [*] --> draft: Campaign created
    draft --> open: Admin opens campaign
    open --> closed: Admin closes OR deadline reached
    closed --> completed: Admin finalizes (optional)
    
    note right of draft
        Admin configures items,
        policies, deadline
    end note
    
    note right of open
        Users submit registrations;
        immediate confirm/reject
    end note
    
    note right of closed
        Registration closed;
        results visible
    end note
    
    note right of completed
        For planning-only:
        skip finalize!
        
        For materialization:
        finalize! applies to rosters
    end note

Sequence Diagram (FCFS Flow)

This diagram shows the lifecycle for a first-come-first-served campaign.

sequenceDiagram
    actor Student
    participant UI as Student UI
    participant Controller as UserRegistrationsController
    participant Campaign
    participant Item
    participant UserReg as UserRegistration
    participant PolicyEngine
    participant Roster as Domain Roster

    rect rgb(235, 245, 255)
    note over Student,PolicyEngine: Registration phase (campaign is open)
    Student->>UI: Visit campaign page
    UI->>Controller: GET /campaigns/:id
    Controller->>Campaign: find(campaign_id)
    Controller->>Campaign: open_for_registrations?
    Campaign-->>Controller: true
    
    Controller->>Campaign: evaluate_policies_for(user, phase: :registration)
    Campaign->>PolicyEngine: eligible?(user, phase: :registration)
    PolicyEngine-->>Campaign: Result(pass: true/false, ...)
    Campaign-->>Controller: Result
    
    alt policies fail
        Controller-->>UI: Ineligible state
        UI-->>Student: Show error: "Not eligible (reason)"
    else policies pass
        Controller-->>UI: Show register buttons
        UI-->>Student: Display available items
        
        Student->>UI: Click "Register for Item X"
        UI->>Controller: POST /campaigns/:id/user_registrations
        
        Controller->>Item: find(item_id)
        Controller->>Item: remaining_capacity
        Item-->>Controller: capacity count
        
        alt capacity available
            Controller->>UserReg: create!(status: :confirmed, ...)
            UserReg-->>Controller: registration record
            Controller-->>UI: Success
            UI-->>Student: "Registered successfully"
        else capacity exhausted
            Controller->>UserReg: create!(status: :rejected, ...)
            UserReg-->>Controller: registration record
            Controller-->>UI: Info: "No capacity"
            UI-->>Student: "Item full, registration rejected"
        end
    end
    end
    
    note over Student,Roster: Later: Admin closes campaign
    
    rect rgb(255, 245, 235)
    note over Student,Roster: View results (processing state)
    Student->>UI: View results
    UI->>Controller: GET /campaigns/:id
    Controller->>Campaign: status
    Campaign-->>Controller: :closed, :processing, or :completed
    Controller-->>UI: Show campaign with status
    UI-->>Student: Display confirmed/rejected
    end
    
    rect rgb(245, 255, 235)
    note over Controller,Roster: Optional: Admin finalizes (materialization)
    Controller->>Campaign: finalize!
    Campaign->>Campaign: evaluate_policies_for(confirmed_users, phase: :finalization)
    alt finalization policies fail
        Campaign-->>Controller: Error (stays in :processing)
    else finalization policies pass
        Campaign->>Item: materialize_allocation!(confirmed_user_ids)
        Item->>Roster: Update domain roster
        Roster-->>Item: Done
        Item-->>Campaign: Done
        Campaign->>Campaign: update!(status: :completed)
        Campaign-->>Controller: Success
    end
    end
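The capacity branch in the diagram can be sketched as a minimal in-memory registrar (an illustrative stand-in, not the controller code; a real implementation must additionally guard the capacity check with a row lock or atomic counter to avoid races between concurrent requests):

```ruby
# Minimal FCFS registrar: confirms while capacity remains, rejects after.
class FcfsItem
  attr_reader :registrations

  def initialize(capacity:)
    @capacity = capacity
    @registrations = {}
  end

  # Mirrors the alt-branch in the diagram: capacity available => :confirmed,
  # capacity exhausted => :rejected. Returns the resulting status.
  def register!(user_id)
    confirmed = @registrations.count { |_uid, status| status == :confirmed }
    status = confirmed < @capacity ? :confirmed : :rejected
    @registrations[user_id] = status
  end
end

item = FcfsItem.new(capacity: 2)
item.register!(1)  # => :confirmed
item.register!(2)  # => :confirmed
item.register!(3)  # => :rejected
```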

Proposed Folder Structure

To keep the new components organized according to Rails conventions, the new files would be placed as follows:

app/
├── models/
│   ├── concerns/
│   │   └── registration/
│   │       ├── campaignable.rb
│   │       └── registerable.rb
│   └── registration/
│       ├── campaign.rb
│       ├── item.rb
│       ├── policy.rb
│       └── user_registration.rb
│
└── services/
  └── registration/
    ├── solvers/
    │   ├── min_cost_flow.rb
    │   └── cp_sat.rb (future)
    ├── allocation_service.rb
    ├── allocation_materializer.rb
    └── policy_engine.rb

This structure separates the ActiveRecord models, shared concerns, and business logic (service objects and solvers) into their conventional directories.

Key Files

  • app/models/registration/campaign.rb - Orchestrates the registration process
  • app/models/registration/user_registration.rb - Records user registrations (registration requests)
  • app/models/registration/policy.rb - Defines eligibility rules
  • app/services/registration/allocation_service.rb - Runs allocation solver
  • app/services/registration/allocation_materializer.rb - Persists results to domain models

Database Tables

  • registration_campaigns - Campaign orchestration records
  • registration_items - Catalog entries linking campaigns to registerables
  • registration_user_registrations - User registration request records with status and preference rank
  • registration_policies - Eligibility rules with kind, phase, config, and position

Note

Column details for each table are documented in the respective model sections above.

Rosters

What is a 'Roster'?

A "roster" is a list of names of people belonging to a particular group, team, or event.

  • Common Examples: A class roster (a list of all students in a class), a team roster (a list of all players on a sports team), or a duty roster (a schedule showing who is working at what time).
  • In this context: It refers to the official list of students enrolled in a tutorial or the list of speakers assigned to a seminar talk.

Problem Overview

  • After campaigns are completed and allocations are materialized into domain models, staff must maintain real rosters.

Sourcing candidates after allocation

When a preference-based campaign completes, some participants may remain unassigned. These are students with zero confirmed registrations in the campaign (pending entries are normalized to rejected on close). You can source these candidates directly from the campaign in roster tools without a separate waitlist table.

Managing unassigned candidates

  • View unassigned candidates: from the completed Campaign Show or on the Roster Overview via a right-side panel "Candidates from campaign" showing only users unassigned in that campaign.
  • Inspect context: for preference-based campaigns, show each student's top 3 original preferences inline, with a way to view the full list on demand.
  • Actions:
    • Assign to group: place the student into a specific tutorial if capacity permits.
    • Move: standard roster operations continue to work across groups.

Placement

The Candidates panel lives on the Roster Overview to provide capacity context across all groups. The Roster Detail focuses on per-group maintenance (participants list, remove/move) and has no candidates panel.

Tip

No reason entry is required for remove, move, or add actions in roster maintenance. Keep actions fast; capacity constraints still apply.

Data model

Unassigned candidates are derived from Registration::UserRegistration records. No extra table is needed; use campaign-scoped queries to list users with zero confirmed entries.
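The derivation is a set difference: everyone who registered in the campaign minus everyone with at least one confirmed entry. The plain-Ruby sketch below illustrates the rule; in MaMpf this would be an ActiveRecord query on Registration::UserRegistration, and the names here are illustrative only.

```ruby
# Each Entry stands for one Registration::UserRegistration row.
Entry = Struct.new(:user_id, :status)

# A user is an unassigned candidate if they registered in the campaign but
# have zero confirmed entries (pending entries are normalized to :rejected
# when the campaign closes).
def unassigned_candidate_ids(entries)
  entries.group_by(&:user_id)
         .reject { |_uid, rows| rows.any? { |r| r.status == :confirmed } }
         .keys
end

entries = [
  Entry.new(1, :confirmed),  # assigned
  Entry.new(2, :rejected),   # unassigned candidate
  Entry.new(2, :rejected),
  Entry.new(3, :confirmed)   # assigned
]
unassigned_candidate_ids(entries)  # => [2]
```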

  • Typical actions: move users between tutorials/talks, add late-comers, remove dropouts, apply exceptional overrides.
  • Non-goals: This system does not re-run the automated solver (`Registration::AllocationService`) or reopen the campaign. It is strictly for manual roster adjustments after the initial allocation is complete.

Solution Architecture

  • Canonical Source: Domain rosters on registerable models (e.g., Tutorial.students, Talk.speakers).
  • Uniform API: A Roster::Rosterable concern provides a consistent interface (roster_user_ids, replace_roster!, etc.).
  • Single Service: A Roster::MaintenanceService handles atomic moves, adds, and removals with capacity checks and logging.
  • Campaign-Independent: Actions operate directly on Roster::Rosterable models; no campaign context is needed for manual changes.
  • Fast Dashboards: The maintenance service can update denormalized counters like Registration::Item.assigned_count to keep UIs in sync.
  • Auditing (Future Enhancement): The service includes a log() method as a hook for future auditing. This can be implemented later to write to a dedicated audit trail (e.g., a RosterChangeEvent model or using a gem like PaperTrail). This would provide a full history of all manual roster modifications, separate from the immutable record of the initial automated assignment stored in Registration::UserRegistration.
  • Exam-specific finalization: When materializing exam rosters from a campaign, eligibility is revalidated at finalize-time; registrants who became ineligible are excluded unless an override exists.

Roster::Rosterable (Concern)

The Universal Roster API

What it represents

A concern that gives any Registration::Registerable model a uniform roster management interface.

Think of it as

The “contract” required by the maintenance service, defining how to read and write to a model's roster.

Public Interface & Contract

| Method | Provided/Required | Description |
|---|---|---|
| roster_user_ids | Required (Override) | Returns the current list of user IDs on the roster as an Array<Integer>. |
| replace_roster!(user_ids:) | Required (Override) | Atomically replaces the entire roster with the given list of user IDs. |
| roster_entries | Required (Override) | Returns an ActiveRecord relation to the join table for campaign tracking. |
| mark_campaign_source!(user_ids, campaign) | Required (Override) | Marks the given users' roster entries as sourced from the specified campaign. |
| allocated_user_ids | Provided | Delegates to roster_user_ids to satisfy the Registration::Registerable contract. |
| materialize_allocation!(user_ids:, campaign:) | Provided | Implements the allocation materialization from Registration::Registerable. |
| add_user_to_roster!(user_id) | Provided (private) | Adds a single user to the roster if not already present. |
| remove_user_from_roster!(user_id) | Provided (private) | Removes a single user from the roster. |

Behavior Highlights

  • Explicit Contract: The concern raises a NotImplementedError if an including class fails to override required methods (#roster_user_ids, #replace_roster!, #roster_entries, #mark_campaign_source!), ensuring the contract is met.
  • Idempotent: Calling replace_roster! with the same set of IDs should result in no change.
  • Registration Integration: Provides allocated_user_ids and materialize_allocation! to satisfy the Registration::Registerable interface, allowing rosters to be managed by the registration system.
  • Campaign Tracking: The materialize_allocation! method preserves manually-added roster entries while replacing campaign-sourced entries, using the source_campaign field on join table records.
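The merge rule behind campaign tracking can be checked in isolation: manually added entries survive materialization, while entries previously sourced from the same campaign are replaced by the new allocation. A plain-Ruby stand-in for the concern's logic:

```ruby
# Sketch of materialize_allocation!'s merge rule on plain ID arrays:
# keep manual entries, swap out the campaign-sourced ones.
def merge_roster(current_ids, campaign_sourced_ids, new_allocation_ids)
  manual_ids = current_ids - campaign_sourced_ids
  (manual_ids + new_allocation_ids).uniq
end

# User 7 was added manually; users 1 and 2 came from the campaign.
# Re-materializing with [2, 3] keeps 7, drops 1, keeps 2, adds 3.
merge_roster([1, 2, 7], [1, 2], [2, 3])  # => [7, 2, 3]
```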

Example Implementation

# filepath: app/models/concerns/roster/rosterable.rb
module Roster
  module Rosterable
    extend ActiveSupport::Concern

    def roster_user_ids
      raise NotImplementedError, "#{self.class.name} must implement #roster_user_ids"
    end

    def replace_roster!(user_ids:)
      raise NotImplementedError, "#{self.class.name} must implement #replace_roster!"
    end

    def allocated_user_ids
      roster_user_ids
    end

    def materialize_allocation!(user_ids:, campaign:)
      transaction do
        current_ids = roster_user_ids
        campaign_sourced_ids = current_ids.select do |uid|
          roster_entries.exists?(user_id: uid, source_campaign: campaign)
        end

        other_ids = current_ids - campaign_sourced_ids
        new_ids = (other_ids + user_ids).uniq

        replace_roster!(user_ids: new_ids)
        mark_campaign_source!(user_ids, campaign)
      end
    end

    private

    def add_user_to_roster!(user_id)
      ids = roster_user_ids
      return if ids.include?(user_id)
      replace_roster!(user_ids: ids + [user_id])
    end

    def remove_user_from_roster!(user_id)
      replace_roster!(user_ids: roster_user_ids - [user_id])
    end

    def roster_entries
      raise NotImplementedError, "#{self.class.name} must implement #roster_entries for campaign tracking"
    end

    def mark_campaign_source!(user_ids, campaign)
      raise NotImplementedError, "#{self.class.name} must implement #mark_campaign_source! for campaign tracking"
    end
  end
end

Usage Scenarios

  • Tutorial and Talk both include Roster::Rosterable.
  • Tutorial implements roster_user_ids by reading from a new tutorial_memberships join table (to be created).
  • Talk implements replace_roster! using its existing speaker_talk_joins association.

Roster::MaintenanceService

Staff Maintenance

What it represents

The single, safe entry point for all staff-initiated roster changes after an allocation is complete.

Think of it as

An admin “move/add/remove” service with capacity checks and logging.

How this is different from Registration::AllocationService

  • Registration::AllocationService is the automated solver that runs once to create the initial allocation.
  • Roster::MaintenanceService is the manual tool for staff to make individual changes to rosters after the campaign is finished.

Public Interface

| Method | Description |
|---|---|
| initialize(actor:) | Sets up the service with the acting user for auditing. |
| move_user!(user_id:, from:, to:, ...) | Atomically moves a user from one Roster::Rosterable to another. |
| add_user!(user_id:, to:, ...) | Adds a user to a Roster::Rosterable. |
| remove_user!(user_id:, from:, ...) | Removes a user from a Roster::Rosterable. |

Behavior Highlights

  • Transactional: All operations, especially move_user!, are performed within a database transaction to ensure atomicity.
  • Capacity Enforcement: Enforces the capacity of the target Roster::Rosterable unless an allow_overfill: true flag is passed.
  • Auditing Hook: Calls a log() method to provide a hook for future audit trail implementation.
  • Denormalization: Can update denormalized counters like Registration::Item.assigned_count to keep dashboards in sync.

Example Implementation

# filepath: app/services/roster/maintenance_service.rb
class Roster::MaintenanceService
  def initialize(actor:)
    @actor = actor
  end

  def move_user!(user_id:, from:, to:, allow_overfill: false, reason: nil)
    raise ArgumentError, "type mismatch" unless from.class == to.class
    ActiveRecord::Base.transaction do
      enforce_capacity!(to) unless allow_overfill
      from.send(:remove_user_from_roster!, user_id)
      to.send(:add_user_to_roster!, user_id)
      touch_counts!(from, to)
      log(:move, user_id: user_id, from: from, to: to, reason: reason)
    end
  end

  def add_user!(user_id:, to:, allow_overfill: false, reason: nil)
    ActiveRecord::Base.transaction do
      enforce_capacity!(to) unless allow_overfill
      to.send(:add_user_to_roster!, user_id)
      touch_counts!(to)
      log(:add, user_id: user_id, to: to, reason: reason)
    end
  end

  def remove_user!(user_id:, from:, reason: nil)
    ActiveRecord::Base.transaction do
      from.send(:remove_user_from_roster!, user_id)
      touch_counts!(from)
      log(:remove, user_id: user_id, from: from, reason: reason)
    end
  end

  private

  def enforce_capacity!(rosterable)
    raise "Capacity reached" if rosterable.full?
  end

  def touch_counts!(*rosterables)
    # Logic to find associated Registration::Items and update assigned_count
  end

  def log(action, **data)
    # Hook for future auditing (e.g., create RosterChangeEvent record)
  end
end
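To make the move semantics concrete, here is a self-contained, in-memory stand-in for two rosterables and the move operation. MiniRoster and its capacity handling are invented for illustration; the real service operates on Roster::Rosterable models inside a database transaction.

```ruby
# In-memory stand-in for a Roster::Rosterable with a capacity.
class MiniRoster
  attr_reader :ids

  def initialize(ids, capacity:)
    @ids = ids
    @capacity = capacity
  end

  def full?
    @ids.size >= @capacity
  end

  def add!(user_id)
    @ids << user_id unless @ids.include?(user_id)
  end

  def remove!(user_id)
    @ids.delete(user_id)
  end
end

# Mirrors Roster::MaintenanceService#move_user!: capacity is checked on the
# target first, so a refused move leaves both rosters untouched.
def move_user!(user_id:, from:, to:, allow_overfill: false)
  raise "Capacity reached" if to.full? && !allow_overfill
  from.remove!(user_id)
  to.add!(user_id)
end

full     = MiniRoster.new([1, 2, 3], capacity: 3)
has_room = MiniRoster.new([4], capacity: 3)

move_user!(user_id: 1, from: full, to: has_room)
full.ids      # => [2, 3]
has_room.ids  # => [4, 1]
```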

Usage Scenarios

  • Moving a student: An administrator moves a student from a full tutorial to one with free space.

    service = Roster::MaintenanceService.new(actor: current_admin)
    tutorial_from = Tutorial.find(1)
    tutorial_to = Tutorial.find(2)
    student_id = 123
    service.move_user!(user_id: student_id, from: tutorial_from, to: tutorial_to, reason: "Balancing class sizes")
    
  • Adding a late-comer: A student who missed the deadline is manually added to a tutorial.

    service = Roster::MaintenanceService.new(actor: current_admin)
    tutorial = Tutorial.find(5)
    student_id = 456
    service.add_user!(user_id: student_id, to: tutorial, reason: "Late registration approved by professor")
    
  • Removing a dropout: A student officially drops the course.

    service = Roster::MaintenanceService.new(actor: current_admin)
    tutorial = Tutorial.find(3)
    student_id = 789
    service.remove_user!(user_id: student_id, from: tutorial, reason: "Student dropped course")
    

Enhanced Domain Models

The following sections describe how existing MaMpf models are enhanced to integrate with the roster management system by implementing the Rosterable concern.

Tutorial (Enhanced)

A Rosterable Target

What it represents

An existing MaMpf tutorial model, enhanced to manage its student list.

Rosterable Implementation

The Tutorial model includes the Roster::Rosterable concern to provide a standard interface for managing its student roster via a join table.

| Method | Implementation Detail |
|---|---|
| roster_user_ids | Plucks user_id values from the tutorial_memberships join table (to be created). |
| replace_roster!(user_ids:) | Deletes existing memberships and creates new ones in a transaction. |

Example Implementation

# filepath: app/models/tutorial.rb
class Tutorial < ApplicationRecord
  include Registration::Registerable
  include Roster::Rosterable

  has_many :tutorial_memberships, dependent: :destroy
  has_many :students, through: :tutorial_memberships, source: :user

  def roster_user_ids
    tutorial_memberships.pluck(:user_id)
  end

  def replace_roster!(user_ids:)
    TutorialMembership.transaction do
      tutorial_memberships.delete_all
      user_ids.each { |uid| tutorial_memberships.create!(user_id: uid) }
    end
  end

  def roster_entries
    tutorial_memberships
  end

  def mark_campaign_source!(user_ids, campaign)
    tutorial_memberships.where(user_id: user_ids)
                       .update_all(source_campaign_id: campaign.id)
  end
end

The tutorial_memberships table should include a source_campaign_id column (nullable) to track which campaign materialized each roster entry.
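A possible migration for the to-be-created table might look like the following. This is a sketch: column names mirror those referenced above (user_id, tutorial_id, source_campaign_id), but the exact schema, indexes, and foreign-key targets are assumptions, not a finalized design.

```ruby
# Hypothetical migration sketch for the tutorial_memberships join table.
class CreateTutorialMemberships < ActiveRecord::Migration[7.1]
  def change
    create_table :tutorial_memberships do |t|
      t.references :tutorial, null: false, foreign_key: true
      t.references :user, null: false, foreign_key: true
      # Nullable: manually added roster entries have no source campaign.
      t.references :source_campaign,
                   foreign_key: { to_table: :registration_campaigns }
      t.timestamps
    end

    # A student appears at most once per tutorial roster.
    add_index :tutorial_memberships, [:tutorial_id, :user_id], unique: true
  end
end
```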


Talk (Enhanced)

A Rosterable Target

What it represents

An existing MaMpf talk model, enhanced to manage its speaker list.

Rosterable Implementation

The Talk model includes the Roster::Rosterable concern to provide a standard interface for managing its speakers.

| Method | Implementation Detail |
|---|---|
| roster_user_ids | Plucks speaker_id values from the speaker_talk_joins join table. |
| replace_roster!(user_ids:) | Deletes existing joins and creates new ones in a transaction. |

Example Implementation

# filepath: app/models/talk.rb
class Talk < ApplicationRecord
  include Registration::Registerable
  include Roster::Rosterable

  has_many :speaker_talk_joins, dependent: :destroy
  has_many :speakers, through: :speaker_talk_joins

  def roster_user_ids
    speaker_talk_joins.pluck(:speaker_id)
  end

  def replace_roster!(user_ids:)
    SpeakerTalkJoin.transaction do
      speaker_talk_joins.delete_all
      user_ids.each { |uid| speaker_talk_joins.create!(speaker_id: uid) }
    end
  end

  def roster_entries
    speaker_talk_joins
  end

  def mark_campaign_source!(user_ids, campaign)
    speaker_talk_joins.where(speaker_id: user_ids)
                      .update_all(source_campaign_id: campaign.id)
  end
end

The speaker_talk_joins table should include a source_campaign_id column (nullable) to track which campaign materialized each speaker assignment.

ERD for Roster Implementations

This diagram shows the concrete database relationships for the two example Roster::Rosterable implementations. The Roster::Rosterable concern provides a uniform API over these different underlying structures.

erDiagram
    TUTORIAL ||--o{ TUTORIAL_MEMBERSHIP : "has (to be created)"
    TUTORIAL_MEMBERSHIP }o--|| USER : "links to"

    TALK ||--o{ SPEAKER_TALK_JOIN : "has (existing)"
    SPEAKER_TALK_JOIN }o--|| USER : "links to"

Sequence Diagram

This diagram shows the two distinct phases: the initial, automated materialization of the roster, followed by ongoing manual maintenance by staff.

sequenceDiagram
    actor Admin
    participant Campaign as Registration::Campaign
    participant Materializer as Registration::AllocationMaterializer
    participant RosterService as Roster::MaintenanceService
    participant Rosterable as Roster::Rosterable (e.g., Tutorial)

    rect rgb(235, 245, 255)
    note over Campaign,Rosterable: Phase 1: Automated Materialization
    Campaign->>Materializer: new(campaign).materialize!
    Materializer->>Rosterable: materialize_allocation!(user_ids:, campaign:)
    note right of Rosterable: From Registration::Registerable
    Rosterable->>Rosterable: 1. Query current roster_user_ids
    Rosterable->>Rosterable: 2. Identify campaign-sourced entries
    Rosterable->>Rosterable: 3. Merge with new user_ids
    Rosterable->>Rosterable: 4. Call replace_roster!(merged_ids)
    Rosterable->>Rosterable: 5. Mark new entries with source_campaign_id
    end

    note over Admin, Rosterable: ... time passes ...

    rect rgb(255, 245, 235)
    note over Admin,Rosterable: Phase 2: Manual Roster Maintenance
    Admin->>RosterService: new(actor: admin).move_user!(...)
    RosterService->>Rosterable: from.remove_user_from_roster!(user_id)
    RosterService->>Rosterable: to.add_user_to_roster!(user_id)
    note right of Rosterable: Uses private methods from Roster::Rosterable concern
    end

Proposed Folder Structure

To keep the new components organized, the new files would be placed as follows:

app/
├── models/
│   └── concerns/
│       └── roster/
│           └── rosterable.rb
│
└── services/
    └── roster/
        └── maintenance_service.rb

Key Files

  • app/models/concerns/roster/rosterable.rb - Uniform roster API concern
  • app/services/roster/maintenance_service.rb - Manual roster modification service

Database Tables

The roster system doesn't introduce new database tables. Instead, it provides a uniform API over existing and to-be-created join tables:

  • tutorial_memberships (to be created) - Join table for tutorial student rosters
  • speaker_talk_joins (existing) - Join table for talk speaker assignments

Note

The Roster::Rosterable concern provides a uniform interface (roster_user_ids, replace_roster!) regardless of the underlying table structure. Column details are shown in the example implementations above.

Assessments & Grading

What is an 'Assessment'?

An "assessment" is a structured evaluation of student learning and performance.

  • Common Examples: A homework assignment with multiple problems, an exam or a seminar talk presentation.
  • In this context: It refers to the grading infrastructure for any evaluable artifact in MaMpf, encompassing both per-task point tracking and final grade recording.

Problem Overview

After registrations and allocations are finalized, MaMpf needs to:

  • Grade diverse items: Support assignments (with per-task points), exams (points + final grade), and talks (final grade only).
  • Handle team submissions: One file uploaded by a team should be graded once, with points automatically distributed to all team members.
  • Track granular progress: Break down assignments into tasks (problems), record points per task per student, and aggregate totals.
  • Support flexible workflows: Allow draft grading, provisional review, publication, and post-publication adjustments.
  • Maintain audit trails: Link graded submissions back to the points awarded for transparency and appeals.

Solution Architecture

We use a unified grading model with clear separation of concerns:

  • Canonical Source: Assessment::Assessment acts as the single gradebook for any graded work (Assignment, Exam, Talk, Achievement).
  • Dual Capability Model: Two concerns provide orthogonal features:
    • Assessment::Pointable: Enables per-task point tracking ("pointbook" mode).
    • Assessment::Gradable: Enables final grade recording without tasks ("gradebook" mode).
  • Participation Tracking: Assessment::Participation records aggregate points, grade, and status per (user, assessment).
  • Granular Points: Assessment::Task and Assessment::TaskPoint models support breakdown into graded components when requires_points = true.
  • Team-Aware Grading: Assessment::SubmissionGrader implements a fan-out pattern: grade one Submission, create Assessment::TaskPoint records for all team members.
  • Roster Integration: Participations are seeded from Roster::Rosterable models (tutorials, talks) or lecture rosters.
  • Idempotent Operations: Re-grading the same submission overwrites points consistently; totals are recomputed atomically.
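The fan-out and idempotency rules combine into a small invariant that can be sketched in plain Ruby. This is a stand-in for Assessment::SubmissionGrader, not its implementation: a hash keyed by [user_id, task_id] plays the role of the task-points table, so re-grading overwrites rather than duplicates.

```ruby
# Grade one team submission once; every member receives the points.
def grade_submission!(task_points, team_user_ids:, task_id:, points:)
  team_user_ids.each { |uid| task_points[[uid, task_id]] = points }
end

# Aggregate a student's total, as recompute_points_total! would.
def points_total(task_points, user_id)
  task_points.sum { |(uid, _task), pts| uid == user_id ? pts : 0 }
end

book = {}
grade_submission!(book, team_user_ids: [1, 2], task_id: :p1, points: 8)
grade_submission!(book, team_user_ids: [1, 2], task_id: :p1, points: 7) # re-grade
grade_submission!(book, team_user_ids: [1],    task_id: :p2, points: 5)

points_total(book, 1)  # => 12
points_total(book, 2)  # => 7
```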

Assessment::Assessment (ActiveRecord Model)

The Gradebook Container

What it represents

The central grading record for a single piece of graded work (assignment, exam, talk, or achievement). It holds configuration, tasks, and aggregates all student participations.

Think of it as

"Homework 3 Gradebook", "Final Exam Points Ledger", "Seminar Talk Grading Sheet"

The main fields and methods of Assessment are:

| Name/Field | Type/Kind | Description |
|---|---|---|
| assessable_type | DB column | Polymorphic type for the graded work (e.g., Assignment, Exam, Talk, Achievement) |
| assessable_id | DB column | Polymorphic ID for the graded work |
| lecture_id | DB column | Optional foreign key for fast scoping to a lecture |
| title | DB column | Human-readable assessment title |
| requires_points | DB column | Boolean: whether this assessment tracks per-task points |
| requires_submission | DB column | Boolean: whether students must upload files |
| total_points | DB column | Optional maximum points (computed from tasks if blank) |
| status | DB column (Enum) | Workflow state: draft (0), open (1), closed (2), graded (3), archived (4) |
| visible_from | DB column | Timestamp when the assessment becomes visible to students |
| due_at | DB column | Deadline for submissions |
| results_published | DB column | Boolean: whether students can see their points and grades |
| participations | Association | All student records for this assessment |
| tasks | Association | Tasks (problems) for this assessment (only if requires_points) |
| task_points | Association | All task points through participations |
| effective_total_points | Method | Returns total_points or the sum of task max_points |
| seed_participations_from!(user_ids:) | Method | Creates participation records for the given users |

Submission Support - Current Scope

The requires_submission field is currently only used for Assignment types. Submission interfaces (upload, view, grade) are only implemented for assignments. For Exams and Talks, this field should remain false as no submission workflow exists for these types yet. See Future Extensions for planned support.

Behavior Highlights

  • Acts as the single source of truth for grading configuration
  • Guards task creation: tasks exist only when requires_points = true
  • Supports two modes: "pointbook" (granular task points) and "gradebook" (final grade only)
  • Aggregates student records (participations) which are seeded from rosters

Example Implementation

# filepath: app/models/assessment/assessment.rb
module Assessment
  class Assessment < ApplicationRecord
    belongs_to :assessable, polymorphic: true
    belongs_to :lecture, optional: true

    has_many :tasks, dependent: :destroy, class_name: "Assessment::Task"
    has_many :participations, dependent: :destroy,
      class_name: "Assessment::Participation"
    has_many :task_points, through: :participations,
      class_name: "Assessment::TaskPoint"

    enum status: { draft: 0, open: 1, closed: 2, graded: 3, archived: 4 }

    validates :title, presence: true
    validate :tasks_only_when_requires_points

    def effective_total_points
      total_points.presence || tasks.sum(:max_points)
    end

    def seed_participations_from!(user_ids:)
      existing = participations.pluck(:user_id).to_set
      (user_ids - existing.to_a).each do |uid|
        participations.create!(user_id: uid)
      end
    end

    private

    def tasks_only_when_requires_points
      if tasks.any? && !requires_points
        errors.add(:base, "Tasks are only allowed when requires_points is true")
      end
    end
  end
end
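Two of the behaviors above can be checked in isolation with plain Ruby (the `presence` semantics are approximated with a nil-check; this is an illustration, not the model code):

```ruby
# 1) effective_total_points: an explicitly configured total wins,
#    otherwise the task maxima are summed.
def effective_total_points(total_points, task_max_points)
  total_points || task_max_points.sum
end

# 2) seed_participations_from!: seeding skips users who already have a
#    participation, so re-running it is idempotent.
def seed(existing_user_ids, roster_user_ids)
  existing_user_ids | roster_user_ids
end

effective_total_points(nil, [4, 6, 10])  # => 20  (computed from tasks)
effective_total_points(25, [4, 6, 10])   # => 25  (explicit override)
seed([1, 2], [2, 3, 4])                  # => [1, 2, 3, 4]
```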

Assessment Creation Timing (Implementation Details)

The timing of assessment creation differs by type to match real-world workflows:

Assignments & Exams (Explicit Creation):

  • Created explicitly via "New Assessment" UI in the Assessments tab
  • Teacher navigates to Lecture → Assessments → New Assessment → selects type
  • Both domain model (Assignment/Exam) and Assessment record created together in one transaction
  • Teacher controls exactly when the assessment is created during the semester

Talks (Automatic Creation):

  • Created automatically when Talk is created in the Content tab (seminars only)
  • Talks are created early for campaign registration, often before the semester starts
  • Assessment record is auto-generated via talk.ensure_gradebook! after Talk save
  • Participations seeded from speakers immediately
  • Grading happens later via the Assessments tab (see Grading Talks in Seminars)

Why the difference: Assignments and exams are created on-demand during the semester. Talks must exist early for registration campaigns, but grading happens much later—auto-creating the assessment ensures the grading infrastructure is ready when needed.

Usage Scenarios

  • For a homework assignment: A teacher creates an Assignment record via the "New Assessment" UI. The system creates both the Assignment and a linked Assessment::Assessment record in one transaction, configured with requires_points: true and requires_submission: true. The teacher adds tasks for each problem (P1, P2, P3). Student records are seeded automatically from the tutorial roster.

  • For an exam: A teacher creates an Exam record via the "New Assessment" UI. The system creates both the Exam and a linked Assessment::Assessment whose assessable is that exam, with requires_points: true to track per-question scores. After the teacher defines all tasks and grades them, a final grade_value can be computed and stored for each student to represent the official exam grade.

  • For a seminar talk: A teacher creates a Talk record in the Content tab. The system automatically creates a linked Assessment::Assessment whose assessable is that talk, with requires_points: false. Later, the teacher records only a final grade for each speaker via the Assessments tab—no tasks or submissions are needed.

  • For an achievement: A teacher creates an Achievement record via the "New Assessment" UI (e.g., "Blackboard Presentation" with value_type: boolean). The system creates both the Achievement and a linked Assessment::Assessment, configured with requires_points: false and requires_submission: false. Participations are seeded for all students in the lecture. Tutors mark completion by setting each participation's grade_value to "Pass" or "Fail" (for boolean), or entering a count/percentage (for numeric/percentage types).


Assessment::Participation (ActiveRecord Model)

Per-Student Grade Record

What it represents

A single student's grading record within an assessment. It tracks their total points, final grade, submission status, and links to all their task-level points.

Think of it as

One row in the gradebook spreadsheet for a specific student in a specific assessment.

Key Fields & Associations

| Name/Field | Type/Kind | Description |
|---|---|---|
| assessment_id | DB column (FK) | The assessment this participation belongs to |
| user_id | DB column (FK) | The student being graded |
| tutorial_id | DB column (FK) | Tutorial context at participation creation time (optional; null for exams/talks) |
| points_total | DB column | Aggregate points across all tasks (denormalized) |
| grade_value | DB column | Final grade (e.g., "1.3", "Pass"); optional |
| status | DB column (Enum) | Workflow state: not_started, in_progress, submitted, graded, exempt |
| submitted_at | DB column | Timestamp when the submission was uploaded (persists after grading) |
| grader_id | DB column (FK) | The tutor/teacher who graded this (optional) |
| graded_at | DB column | Timestamp when grading was completed |
| results_published_at | DB column | Per-participation publication timestamp (optional) |
| published | DB column | Boolean: whether results are visible to the student |
| locked | DB column | Boolean: prevents further edits after publication |
| task_points | Association | All task-level point records for this student in this assessment |

Tutorial Context Details

The tutorial_id field captures which tutorial the student was in at the time of participation creation (when seed_participations_from_roster! runs during assessment setup). This field:

  • Is set once when participations are initialized from the roster
  • Is never updated if the student changes tutorials mid-semester
  • Is nullable for assessments without tutorial context (e.g., exams, talks)
  • Enables per-tutorial publication control for assignments
  • Provides performance optimization for tutor grading queries

Behavior Highlights

  • Enforces uniqueness per (assessment, user) via database constraint
  • Maintains points_total as the sum of all associated TaskPoint records
  • Preserves submission history via submitted_at even after status transitions to :graded
  • Can carry both granular points (via tasks) and a final grade (for exams)
  • Supports workflow states from initial submission through final grading
  • Provides locking mechanism to prevent post-publication tampering

Example Implementation

# filepath: app/models/assessment/participation.rb
module Assessment
  class Participation < ApplicationRecord
    self.table_name = "assessment_participations"

    belongs_to :assessment, class_name: "Assessment::Assessment"
    belongs_to :user
    belongs_to :tutorial, optional: true
    belongs_to :grader, class_name: "User", optional: true

    has_many :task_points, dependent: :destroy,
      class_name: "Assessment::TaskPoint"

    enum status: {
      not_started: 0,
      in_progress: 1,
      submitted: 2,
      graded: 3,
      exempt: 4
    }

    validates :user_id, uniqueness: { scope: :assessment_id }

    def recompute_points_total!
      update!(points_total: task_points.sum(:points))
    end

    def results_visible?
      results_published_at.present?
    end
  end
end

Tutorial ID Behavior (Implementation Details)

The tutorial_id on participation is never updated after creation. It represents which tutorial the student was in when participations were initialized during assessment setup, not their current tutorial assignment.

When tutorial_id is set:

  • Assignments: Set when seed_participations_from_roster! runs after assignment creation, capturing the tutorial each student belongs to at that moment
  • Exams: Set to nil (exams don't have tutorial context)
  • Talks: Set to nil (talks have speakers, not tutorial participants)

Why it doesn't update:

  • Preserves historical grading context (which tutor graded this work)
  • Determines publication control (which tutorial can publish results)
  • Provides audit trail for grade complaints
  • Enables fast queries without roster joins

Edge case - student switches tutorials:

  • Participation keeps original tutorial_id
  • Original tutorial's tutor still grades their work
  • Original tutorial's publication controls still apply
  • If manual reassignment is needed, teacher can update tutorial_id as admin action

Usage Scenarios

  • After assessment setup: When an assignment is created, assignment.seed_participations_from_roster! runs, creating one Assessment::Participation record for each student across all lecture tutorials. Each participation is initialized with status: :not_started, points_total: 0, submitted_at: nil, and tutorial_id set to the tutorial the student currently belongs to.

  • Student submits work: A student uploads their homework file. The system sets their participation to status: :submitted and records submitted_at: Time.current. This timestamp persists even after grading. The tutorial_id remains unchanged.

  • After grading a submission: A tutor grades a team submission for Problem 1. The grading service creates or updates Assessment::TaskPoint records for each team member, then calls recompute_points_total! on their participation to update the aggregate score. The status transitions to :graded and graded_at is set, but submitted_at and tutorial_id remain unchanged—preserving the submission and tutorial history.

  • Publishing exam results: After all exam tasks are graded, the teacher marks participations as published: true and their status is :graded. Students can now see their points breakdown and final grade (if grade_value is set). Exam participations have tutorial_id: nil since exams don't have tutorial context.

  • Per-tutorial publication (assignments): Tutorial A completes grading on Monday. The tutor sets results_published_at: Time.current for all participations where tutorial_id = tutorial_a.id. Students in Tutorial A can now see their results. Tutorial B's students (with tutorial_id = tutorial_b.id and results_published_at: nil) still see "pending" status.

  • Handling exemptions: A student provides a medical certificate and is marked status: :exempt. Their participation record exists but no points are computed, no grade is assigned, and both submitted_at and graded_at remain nil. The tutorial_id is preserved for audit purposes.

  • Distinguishing submission vs non-submission: After grading is complete, the teacher can query submitted_at.present? to distinguish students who submitted work (even if they received 0 points for quality) from those who never submitted at all.

  • Student switches tutorials mid-semester: Alice is in Tutorial A when participations are initialized for Homework 3. Her participation has tutorial_id: 1 (Tutorial A). In week 6, she switches to Tutorial B. When Tutorial A publishes results, Alice's Homework 3 results become visible because her participation's tutorial_id still points to Tutorial A. Her future assignments will have new participations with tutorial_id: 2 (Tutorial B).
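
The pinned-`tutorial_id` behavior in this scenario can be sketched without Rails. This is a minimal plain-Ruby illustration (all names and IDs are assumptions, not the real schema): publication follows the tutorial recorded at seeding time, not the student's current tutorial.

```ruby
# Sketch: participations pin the tutorial they were seeded in; publishing a
# tutorial's results flips visibility only for participations pinned to it.
Participation = Struct.new(:user, :tutorial_id, :results_published_at) do
  def results_visible?
    !results_published_at.nil?
  end
end

alice = Participation.new("alice", 1, nil) # seeded while Alice was in Tutorial A (id 1)
bob   = Participation.new("bob",   2, nil) # Bob sits in Tutorial B (id 2)
participations = [alice, bob]

# Alice later switches to Tutorial B, but her participation keeps tutorial_id 1.
# Publishing Tutorial A therefore reveals her Homework 3 result:
now = Time.now
participations.select { |p| p.tutorial_id == 1 }
              .each { |p| p.results_published_at = now }

alice.results_visible?  # => true
bob.results_visible?    # => false (Tutorial B has not published yet)
```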


Assessment::Task (ActiveRecord Model)

Atomic Graded Component

What it represents

One graded component (problem, question, or rubric item) within an assessment that tracks points independently.

Think of it as

"Problem 1 (worth 10 points)" on a homework assignment or "Question 3 (worth 5 points)" on an exam.

Key Fields & Associations

| Name/Field    | Type/Kind      | Description                                                 |
|---------------|----------------|-------------------------------------------------------------|
| assessment_id | DB column (FK) | The assessment this task belongs to                         |
| title         | DB column      | Human-readable task name (e.g., "Problem 1", "Question 3a") |
| position      | DB column      | Display order within the assessment                         |
| max_points    | DB column      | Maximum achievable points for this task                     |
| description   | DB column      | Optional detailed instructions or rubric text               |
| task_points   | Association    | All point records across all students for this task         |

Behavior Highlights

  • Exists only when the parent assessment has requires_points: true
  • Enforces max_points >= 0 via validation
  • Position determines display order in grading interfaces
  • Deletion cascades to all associated TaskPoint records

Example Implementation

# filepath: app/models/assessment/task.rb
module Assessment
  class Task < ApplicationRecord
    self.table_name = "assessment_tasks"

    belongs_to :assessment, class_name: "Assessment::Assessment"
    has_many :task_points, dependent: :destroy,
      class_name: "Assessment::TaskPoint"

    validates :title, presence: true
    validates :max_points, numericality: { greater_than_or_equal_to: 0 }
    validates :position, numericality: { only_integer: true }, allow_nil: true

    acts_as_list scope: :assessment
  end
end

Multiple Choice Exam Extension

For exams with multiple choice components requiring legal compliance, see the Multiple Choice Exams chapter. That extension adds is_multiple_choice and grade_scheme_id fields with associated validations.

Usage Scenarios

  • Creating tasks for a homework: After setting up an assignment's assessment, the teacher creates tasks: assessment.tasks.create!(title: "Problem 1", max_points: 10, position: 1), assessment.tasks.create!(title: "Problem 2", max_points: 15, position: 2). Each task defines a gradeable component.

  • Exam with multiple questions: An exam assessment has tasks for each question. A task titled "Question 3: Proof of Theorem" with max_points: 8 allows tutors to grade that specific question independently across all students.

  • Automatic total calculation: If the assessment's total_points field is blank, calling assessment.effective_total_points sums all task max_points values (e.g., 10 + 15 + 8 = 33 total points).

  • Reordering tasks: Teachers can adjust the position field to reorder how tasks appear in the grading interface without changing the underlying data structure.
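
The fallback behind the "automatic total calculation" scenario can be sketched in plain Ruby. The class and method names below mirror the document's terminology but are illustrative assumptions, not the real model:

```ruby
# Sketch: an explicit total_points wins; otherwise the effective total is
# the sum of the per-task maxima.
Task = Struct.new(:title, :max_points)

class AssessmentSketch
  attr_accessor :total_points, :tasks

  def initialize(total_points: nil, tasks: [])
    @total_points = total_points
    @tasks = tasks
  end

  def effective_total_points
    total_points || tasks.sum(&:max_points)
  end
end

a = AssessmentSketch.new(tasks: [Task.new("Problem 1", 10),
                                 Task.new("Problem 2", 15),
                                 Task.new("Question 3", 8)])
a.effective_total_points  # => 33 while total_points is blank
a.total_points = 40
a.effective_total_points  # => 40 once an explicit total is set
```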


Assessment::TaskPoint (ActiveRecord Model)

Per-Student, Per-Task Grade Record

What it represents

The points and feedback assigned to a specific student for a specific task within an assessment.

Think of it as

"Alice earned 8 out of 10 points on Problem 1, with comment: 'Minor calculation error in step 3.'"

Key Fields & Associations

| Name/Field                  | Type/Kind      | Description                                                               |
|-----------------------------|----------------|---------------------------------------------------------------------------|
| assessment_participation_id | DB column (FK) | Links to the student's participation record                               |
| task_id                     | DB column (FK) | The task being graded                                                     |
| points                      | DB column      | Points awarded (must be ≥ 0; may exceed task.max_points for bonus credit) |
| comment                     | DB column      | Optional feedback text for the student                                    |
| grader_id                   | DB column (FK) | The tutor who assigned these points (optional)                            |
| submission_id               | DB column (FK) | Links to the graded submission for audit trail (optional)                 |

Behavior Highlights

  • Enforces uniqueness per (participation, task) via database constraint
  • Triggers recomputation of Assessment::Participation.points_total on save
  • Visibility controlled at the assessment/participation level (results_published, or per-tutorial results_published_at for assignments), never by per-task state
  • Links back to the specific submission that was graded for audit trails
  • Validation enforces points ≥ 0; there is no upper bound, so bonus points above the task maximum are allowed
  • Maintains update history via updated_at for complaint resolution tracking

Example Implementation

# filepath: app/models/assessment/task_point.rb
module Assessment
  class TaskPoint < ApplicationRecord
    self.table_name = "assessment_task_points"

    belongs_to :assessment_participation,
      class_name: "Assessment::Participation"
    belongs_to :task, class_name: "Assessment::Task"
    belongs_to :grader, class_name: "User", optional: true
    belongs_to :submission, optional: true

    validates :points, numericality: { greater_than_or_equal_to: 0 }

    after_commit :bubble_totals

    private

    def bubble_totals
      assessment_participation.recompute_points_total!
    end
  end
end

Extra Points Allowed

Points may exceed the task maximum to support extra credit and bonus point scenarios. There is no upper-bound validation on the points field.

Usage Scenarios

  • Grading a team submission: A tutor grades Problem 1 of a team homework. The grading service creates or updates one Assessment::TaskPoint record per team member, all with the same points value (e.g., 8/10), linking each to submission_id: 42 for audit purposes.

  • Bonus points: A tutor awards 12 points out of 10 for exceptional work on a problem. The system accepts this without validation errors, allowing the student's total to exceed the nominal maximum.

  • Publishing results: After completing all grading, the teacher sets assessment.results_published = true. Students can now see all their task points and comments at once.

  • Recomputation trigger: After saving a TaskPoint with 8 points, the after_commit callback automatically calls assessment_participation.recompute_points_total!, updating the student's aggregate score across all tasks.

  • Handling complaints: A student views their exam and submits a complaint about Question 2. The tutor reviews the work, agrees there was a grading error, and updates the Assessment::TaskPoint from 5 to 7 points. The updated_at timestamp records when the adjustment was made. The recomputation callback updates the student's points_total and potentially their final grade_value.

  • Audit trail: Months later, a student appeals their grade. The teacher queries task_point.submission to retrieve the original PDF that was graded, verifying the points awarded match the work submitted.

Re-grading and Corrections

The grading interface remains available even after an assessment transitions to graded status. This supports corrections for:

  • Discovered grading mistakes
  • Student complaints requiring point adjustments
  • Late bonus point awards

When accessing grading for a graded or published assessment, the UI should display a warning:

Results already published
Changes will be visible to students immediately. Continue?

This ensures teachers are aware that modifications affect published results. The results_published flag controls visibility, not editability—TaskPoint records remain mutable across all assessment states, and recompute_points_total! is idempotent.
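
The idempotency claim can be illustrated with a small plain-Ruby sketch (no Rails; class and method names are assumptions mirroring the document's vocabulary): because the total is always derived from the current task points, re-grading overwrites rather than accumulates, and recomputing twice changes nothing.

```ruby
# Sketch: upsert-style task points plus a derived, idempotent total.
class ParticipationSketch
  attr_reader :points_total

  def initialize
    @task_points = {}   # task_id => points (one row per task, like the DB constraint)
    @points_total = 0
  end

  # Re-grading the same task overwrites the previous points, never duplicates.
  def set_task_points(task_id, points)
    @task_points[task_id] = points
    recompute_points_total!
  end

  def recompute_points_total!
    @points_total = @task_points.values.sum
  end
end

part = ParticipationSketch.new
part.set_task_points(1, 8)
part.set_task_points(2, 12)     # bonus beyond a 10-point nominal maximum
part.points_total               # => 20
part.set_task_points(2, 7)      # complaint resolved: 12 -> 7
part.recompute_points_total!    # idempotent: calling again changes nothing
part.points_total               # => 15
```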

Per-Tutorial Result Publication (Implementation Details)

For assignments with multiple tutorials, results can be published independently per tutorial as grading completes. This eliminates coordination burden and provides faster feedback to students.

Publication Model:

  • Each Assessment::Participation has a results_published_at timestamp (nullable)
  • Tutor can publish results for their tutorial when grading is complete
  • Publication is per-tutorial, not lecture-wide
  • Students see results when participation.results_visible? returns true

Implementation:

def results_visible?
  results_published_at.present?
end

Workflow:

  1. Tutorial A completes grading on Monday
  2. Tutor clicks "Publish Results for Tutorial A"
  3. System sets results_published_at = Time.current for all participations where tutorial_id = tutorial_a.id
  4. Students in Tutorial A immediately see their points and grades
  5. Tutorial B continues grading, their students still see "pending"
  6. Tutorial B completes Thursday, publishes independently

Benefits:

  • No waiting for slowest tutorial to finish
  • Tutors control their own publication timeline
  • Teacher oversight still possible (can hide results per tutorial)
  • Maintains audit trail of when results were released

Cross-Tutorial Teams (Edge Case): When team members are in different tutorials:

  • Publish when any member's tutorial publishes (permissive)
  • OR: Require all members' tutorials to publish (strict)
  • Recommended: Use permissive model for simplicity
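
The two policies above can be sketched as a single predicate. This is a hedged illustration in plain Ruby; the parameter names (`member_tutorial_ids`, `published_tutorial_ids`) are assumptions, not real API:

```ruby
# Sketch: visibility of a cross-tutorial team's results under the
# permissive ("any member's tutorial published") and strict
# ("all members' tutorials published") policies.
def team_results_visible?(member_tutorial_ids, published_tutorial_ids, policy: :permissive)
  case policy
  when :permissive
    member_tutorial_ids.any? { |id| published_tutorial_ids.include?(id) }
  when :strict
    member_tutorial_ids.all? { |id| published_tutorial_ids.include?(id) }
  else
    raise ArgumentError, "unknown policy #{policy}"
  end
end

published = [1]       # only Tutorial A (id 1) has published so far
team      = [1, 2]    # team members sit in Tutorials A and B
team_results_visible?(team, published)                   # => true  (permissive)
team_results_visible?(team, published, policy: :strict)  # => false
```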

Query Examples:

# Publish results for Tutorial X
tutorial_x_participations = assessment.participations
  .where(tutorial_id: tutorial_x.id)
tutorial_x_participations.update_all(results_published_at: Time.current)

# Student view query
participation.results_visible?  # true if results_published_at is set

# Teacher dashboard: which tutorials have published?
assessment.participations
  .select(:tutorial_id, "COUNT(*) as total")
  .where.not(results_published_at: nil)
  .group(:tutorial_id)

Exam and Talk Publication: Exams and talks have tutorial_id: nil on their participations. Publication control uses the legacy assessment.results_published boolean instead of per-participation timestamps. Per-tutorial publication only applies to assignments.
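
A combined visibility check following this rule might look as below. This is a sketch in plain Ruby with hashes standing in for records; the branching logic is an assumption derived from the text, not the actual implementation:

```ruby
# Sketch: assignment participations (tutorial_id set) use the per-tutorial
# timestamp; exam/talk participations (tutorial_id nil) fall back to the
# legacy assessment-wide results_published flag.
def results_visible?(participation, assessment)
  if participation[:tutorial_id].nil?
    assessment[:results_published]               # legacy, lecture-wide switch
  else
    !participation[:results_published_at].nil?   # per-tutorial timestamp
  end
end

exam_part = { tutorial_id: nil, results_published_at: nil }
hw_part   = { tutorial_id: 1,   results_published_at: Time.now }

results_visible?(exam_part, { results_published: true })   # => true
results_visible?(hw_part,   { results_published: false })  # => true
```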


Assessment::Assessable (Concern)

Base Contract for Gradeable Models

What it represents

A concern that enables any domain model (Assignment, Exam, Talk, Achievement) to be linked to an Assessment::Assessment record and manage its grading lifecycle.

Think of it as

The minimal "make me gradeable" interface that all graded work must implement.

Public Interface

| Method                           | Description                                                                        |
|----------------------------------|------------------------------------------------------------------------------------|
| assessment                       | Returns the linked Assessment::Assessment record (polymorphic has_one association) |
| ensure_assessment!(...)          | Creates or updates the linked Assessment::Assessment with the given configuration  |
| seed_participations_from_roster! | Creates Assessment::Participation records for all students in the roster           |

Behavior Highlights

  • Establishes the polymorphic link via has_one :assessment, as: :assessable
  • Provides a safe method to create/update the assessment without duplication
  • seed_participations_from_roster! should be overridden by including classes to define roster logic
  • Does not enforce whether points or grades are used—that's delegated to Assessment::Pointable and Assessment::Gradable

Example Implementation

# filepath: app/models/assessment/assessable.rb
module Assessment
  module Assessable
    extend ActiveSupport::Concern

    included do
      has_one :assessment, as: :assessable, dependent: :destroy,
        class_name: "Assessment::Assessment"
    end

    def ensure_assessment!(title:, requires_points:, requires_submission: false,
                           visible_from: nil, due_at: nil)
      a = assessment || build_assessment
      a.title = title
      a.requires_points = requires_points
      a.requires_submission = requires_submission
      a.visible_from ||= visible_from if visible_from
      a.due_at ||= due_at if due_at
      a.lecture ||= try(:lecture)
      a.save! if a.changed?
      a
    end

    # Override this method in including classes to define roster logic.
    # For Assignment: aggregate from lecture.tutorials
    # For Exam: use exam registration roster
    # For Talk: use speaker roster
    def seed_participations_from_roster!
      raise NotImplementedError,
        "#{self.class.name} must implement seed_participations_from_roster!"
    end
  end
end

Usage Scenarios

  • Initial setup for an assignment: After creating an Assignment record, the teacher calls assignment.ensure_assessment!(title: "Homework 3", requires_points: true, requires_submission: true) to create the linked Assessment::Assessment. Then assignment.seed_participations_from_roster! aggregates students from all lecture tutorials and creates participation records for each.

  • Updating assessment metadata: A teacher realizes the due date was wrong and calls assignment.ensure_assessment!(title: "Homework 3", requires_points: true, due_at: 1.week.from_now) again. The method is idempotent—it updates the existing Assessment::Assessment rather than creating a duplicate.

  • For exams after registration: An Exam becomes Rosterable after its registration campaign is completed and allocations are materialized. When calling exam.seed_participations_from_roster!, the concern reads from the exam's roster (the confirmed exam registrants) to seed participations. Only students who successfully registered for the exam will have participation records created.
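
The override contract described above is a classic template-method pattern. The following plain-Ruby sketch shows the shape without Rails; all class names and the hash-based participation format are illustrative assumptions:

```ruby
# Sketch: the concern supplies a raising default; each including class
# provides its own roster logic.
module AssessableSketch
  def seed_participations_from_roster!
    raise NotImplementedError,
          "#{self.class.name} must implement seed_participations_from_roster!"
  end
end

class ExamSketch
  include AssessableSketch

  def initialize(registered_user_ids)
    @registered_user_ids = registered_user_ids
  end

  # Exams seed from the confirmed exam registrants; no tutorial context.
  def seed_participations_from_roster!
    @registered_user_ids.map { |id| { user_id: id, tutorial_id: nil } }
  end
end

class TalkSketch
  include AssessableSketch   # no override: calling the method raises
end

ExamSketch.new([7, 9]).seed_participations_from_roster!
# => [{ user_id: 7, tutorial_id: nil }, { user_id: 9, tutorial_id: nil }]
```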


Assessment::Pointable (Concern)

Enables Per-Task Point Tracking

What it represents

A concern that extends Assessment::Assessable to enable granular, per-task point tracking for graded work that can be broken down into components.

Think of it as

"Turn on pointbook mode" for assignments and exams that need task-by-task grading.

Public Interface

| Method                 | Description                                                                     |
|------------------------|---------------------------------------------------------------------------------|
| ensure_pointbook!(...) | Creates or updates the linked Assessment::Assessment with requires_points: true |

Behavior Highlights

  • Includes Assessment::Assessable and builds on its interface
  • Forces requires_points: true when creating the assessment
  • Enables the creation of Assessment::Task records for breaking down graded components
  • Allows optional submission requirement based on the work type
  • Assessment will aggregate points from all task-level grades

Example Implementation

# filepath: app/models/assessment/pointable.rb
module Assessment
  module Pointable
    extend ActiveSupport::Concern
    include Assessment::Assessable

    def ensure_pointbook!(title:, requires_submission: false, **opts)
      ensure_assessment!(
        title: title,
        requires_points: true,
        requires_submission: requires_submission,
        **opts
      )
    end
  end
end

Usage Scenarios

  • For homework assignments: After creating an assignment, call assignment.ensure_pointbook!(title: "Homework 3", requires_submission: true, due_at: 1.week.from_now). The assessment is configured for task-level grading, and students must upload files. Tasks are then added for each problem.

  • For exams with per-question tracking: An exam includes this concern to track points per question. Call exam.ensure_pointbook!(title: "Final Exam", requires_submission: false) since students don't upload files for in-person exams. Tasks represent individual exam questions.

  • Idempotent reconfiguration: A teacher realizes they set the wrong due date and calls assignment.ensure_pointbook!(title: "Homework 3", requires_submission: true, due_at: 2.weeks.from_now). The method updates the existing assessment without creating a duplicate.


Assessment::Gradable (Concern)

Enables Final Grade Recording

What it represents

A concern that extends Assessment::Assessable to enable recording a final grade without task-level breakdown.

Think of it as

"Turn on gradebook mode" for seminar talks or other work that receives only a single grade.

Public Interface

| Method                             | Description                                                                                                                                      |
|------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|
| ensure_gradebook!(...)             | Creates or updates the linked Assessment::Assessment with requires_points: false by default, preserving an existing requires_points: true configuration |
| set_grade!(user:, value:, grader:) | Records a final grade for a specific student                                                                                                     |

Behavior Highlights

  • Includes Assessment::Assessable and builds on its interface
  • Defaults requires_points to false when creating the assessment, but retains true if it was already enabled (e.g., when combined with Assessment::Pointable)
  • No tasks or submissions are required
  • Directly updates Assessment::Participation.grade_value for each student
  • Can be combined with Assessment::Pointable for exams that need both points and final grades

Example Implementation

# filepath: app/models/assessment/gradable.rb
module Assessment
  module Gradable
    extend ActiveSupport::Concern
    include Assessment::Assessable

    def ensure_gradebook!(title:, **opts)
      requires_points = assessment&.requires_points
      ensure_assessment!(
        title: title,
        requires_points: requires_points.nil? ? false : requires_points,
        requires_submission: false,
        **opts
      )
    end

    def set_grade!(user:, value:, grader: nil)
      a = assessment || raise("No gradebook; call ensure_gradebook! first")
      part = a.participations.find_or_create_by!(user_id: user.id)
      part.update!(
        grade_value: value,
        grader_id: grader&.id,
        graded_at: Time.current,
        status: :graded
      )
    end
  end
end

Usage Scenarios

  • For seminar talks: After creating a talk, call talk.ensure_gradebook!(title: "Seminar Talk: Topology") to create an assessment without tasks. After the presentation, call talk.set_grade!(user: speaker, value: "1.0", grader: professor) to record the final grade.

  • For exams with final grades: An exam includes both Assessment::Pointable and Assessment::Gradable. After all tasks are graded and points computed, the teacher can call exam.set_grade!(user: student, value: "1.3", grader: professor) to store the official grade that appears on transcripts.

  • Idempotent grade updates: A teacher corrects a mistakenly entered grade by calling talk.set_grade! again with the new value. The method updates the existing participation record rather than creating a duplicate.
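
The upsert semantics behind the "idempotent grade updates" scenario can be sketched in plain Ruby. Class and method names mirror the document's vocabulary but are illustrative assumptions:

```ruby
# Sketch: one participation per user, updated in place on re-grade
# (find_or_create_by! semantics modeled with a hash keyed by user).
class GradebookSketch
  def initialize
    @participations = {}   # user => participation hash
  end

  def set_grade!(user:, value:, grader: nil)
    part = (@participations[user] ||= { user: user })
    part.merge!(grade_value: value, grader: grader, status: :graded)
    part
  end

  def participation_count
    @participations.size
  end
end

gb = GradebookSketch.new
gb.set_grade!(user: "speaker", value: "1.7", grader: "prof")
gb.set_grade!(user: "speaker", value: "1.0", grader: "prof")  # correction
gb.participation_count  # => 1 (updated, not duplicated)
```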


Enhanced Domain Models

The following sections describe how existing MaMpf models are enhanced to integrate with the assessment system by implementing the grading concerns.

Assignment (Enhanced)

A Pointable Target with Submissions

What it represents

An existing MaMpf assignment model, enhanced to manage per-task grading with team submissions.

Grading Implementation

The Assignment model includes the Assessment::Pointable concern to provide per-task point tracking.

| Concern/Method         | Implementation Detail                               |
|------------------------|-----------------------------------------------------|
| Assessment::Pointable  | Enables task-by-task grading with aggregated points |
| Roster integration     | Students aggregated from all lecture tutorials      |
| Submission requirement | requires_submission: true in the assessment         |

Example Implementation

class Assignment < ApplicationRecord
  include Assessment::Pointable

  belongs_to :lecture
  has_many :submissions, dependent: :destroy

  after_create :setup_grading

  private

  def setup_grading
    ensure_pointbook!(
      title: title,
      requires_submission: true,
      due_at: deadline
    )
    seed_participations_from_roster!
  end

  def seed_participations_from_roster!
    return unless assessment

    # Aggregate students from all tutorials in the lecture
    lecture.tutorials.each do |tutorial|
      user_ids = tutorial.roster_user_ids
      user_ids.each do |user_id|
        assessment.participations.find_or_create_by!(user_id: user_id) do |part|
          part.tutorial_id = tutorial.id
        end
      end
    end
  end
end

Talk (Enhanced)

A Gradable Target

What it represents

An existing MaMpf talk model, enhanced to record only final grades without task breakdown.

Grading Implementation

The Talk model includes the Assessment::Gradable concern for simple grade recording.

| Concern/Method         | Implementation Detail                                       |
|------------------------|-------------------------------------------------------------|
| Assessment::Gradable   | Records final grade only, no tasks                          |
| Roster integration     | Speakers come from the talk's roster via Roster::Rosterable |
| Submission requirement | requires_submission: false and requires_points: false       |

Example Implementation

class Talk < ApplicationRecord
  include Roster::Rosterable
  include Assessment::Gradable

  after_create :setup_grading

  private

  def setup_grading
    ensure_gradebook!(title: title)
    seed_participations_from_roster!
  end
end

Grading Talks in Seminars

Seminar-Specific Workflow

Talks in seminars follow a different workflow than assignments and exams. Talks are created early (often before the semester starts) for campaign registration, but grading happens much later after presentations are delivered.

Workflow Overview

  1. Talk Creation (Early):

    • Teacher creates talks in the Content tab of a seminar
    • Each talk is created for registration campaign purposes (students sign up for presentation slots)
    • Assessment record is automatically created via after_create :setup_grading hook
    • Participations are seeded from speakers immediately
  2. Campaign & Registration:

    • Students register for talk slots via registration campaign
    • Talks exist with linked assessment records, but no grading yet
  3. Presentation Delivery:

    • Semester progresses, students deliver presentations
    • Assessment records are already in place, ready for grading
  4. Grading (Late):

    • Teacher navigates to Assessments tab in seminar
    • Tab shows read-only list of all talks with inline grading interface
    • Teacher enters final grade directly in the list (no per-task breakdown)
    • Optionally clicks talk title for detailed view to add feedback notes

UI Design for Seminar Assessments

Assessments Tab (Seminar Context):

  • Shows only talks (no assignments or exams in seminars)
  • No "New Assessment" button (talks are created via Content tab)
  • Inline grade input per row for fast grading workflow
  • Columns: Title | Speaker(s) | Grade (inline dropdown) | Status | Actions (view details)
  • Help text: "Talks are created in the Content tab"

Grading UX:

  • One click to focus grade dropdown, one click to save
  • Grade range: 1.0 - 5.0 (German grading scale) or Pass/Fail
  • Auto-save on blur or explicit Save button
  • Click talk title → opens assessment show page for detailed feedback

Why Auto-Create Assessments?

Creating the assessment record early ensures the grading infrastructure is ready when needed. Teachers don't have to remember to "prepare talks for grading" later—the system handles it automatically.

Seminar-Specific Constraints

  • Talks have requires_points: false (no task breakdown)
  • Talks have requires_submission: false (no file uploads for presentations)
  • Assessments tab is read-only for talk creation (Content tab owns talk CRUD)

Exam

A Flexible Gradable Target

See Dedicated Chapter

The Exam model is fully documented in the Exam Model chapter, including registration, grading, and multiple choice exam support. This section provides a brief overview of its assessment integration.

Assessment Integration

The Exam model includes both Assessment::Pointable and Assessment::Gradable concerns for flexible exam grading. The grading mode is configurable per exam instance.

| Concern/Method         | Implementation Detail                                                                        |
|------------------------|----------------------------------------------------------------------------------------------|
| Assessment::Pointable  | Optional: tracks points per exam question/problem when needed                                |
| Assessment::Gradable   | Always included: records final grade for transcripts                                         |
| Assessment::Assessable | Base concern linking exam to Assessment::Assessment                                          |
| Roster integration     | Students come from exam registration via Registration::Registerable and Roster::Rosterable   |
| Submission requirement | requires_submission: false since exams are typically graded in person (or scanned separately) |

Grading Modes

With Pointbook (Pointbook + Gradebook):

  • Includes both Assessment::Pointable and Assessment::Gradable
  • Tutors grade per-question/problem points via tasks
  • System computes points_total for each student
  • Staff applies grade scheme to convert points to final grades
  • Use cases: Written exams with detailed point breakdown, oral exams with rubric scoring

Without Pointbook (Gradebook only):

  • Includes only Assessment::Gradable
  • Examiner records final grade directly
  • No per-question breakdown needed
  • No points tracking, just final grade (e.g., "1.0", "2.3")
  • Use cases: Holistic oral exams, pass/fail written exams, interviews

Grading Workflow

With per-question points:

  1. Students register for exam via registration campaign
  2. Campaign materializes → exam roster is populated
  3. After exam is administered, staff creates Assessment::Assessment with requires_points: true
  4. seed_participations_from_roster! creates participation records
  5. Tutors grade per-question points via tasks
  6. System computes points_total for each student
  7. Staff applies grade scheme to convert points to final grades
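
Step 7 (applying a grade scheme) can be sketched as a threshold lookup. The concrete cutoffs and the German 1.0-5.0 scale below are purely illustrative assumptions; real schemes are configured per exam:

```ruby
# Sketch: map points_total to a final grade via a descending threshold table.
GRADE_SCHEME = [
  [45, "1.0"], [40, "1.3"], [35, "1.7"], [30, "2.0"],
  [25, "2.7"], [20, "3.0"], [15, "3.7"], [10, "4.0"]
].freeze

def grade_for(points_total)
  threshold = GRADE_SCHEME.find { |min, _grade| points_total >= min }
  threshold ? threshold.last : "5.0"   # below the lowest cutoff: failed
end

grade_for(47)  # => "1.0"
grade_for(27)  # => "2.7"
grade_for(3)   # => "5.0"
```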

Without per-question points:

  1. Students register for exam via registration campaign
  2. Campaign materializes → exam roster is populated
  3. Staff creates Assessment::Assessment with requires_points: false
  4. seed_participations_from_roster! creates participation records
  5. Examiner records final grade directly after examination
  6. No point calculation needed

For multiple choice exam support and legal compliance, see Exam Model - Multiple Choice Exams.


Submission (Extended Model)

Team-Capable Graded Work

What it represents

A file or set of files uploaded by one or more students for grading. Supports both individual and team submissions.

Think of it as

"HW3.pdf uploaded by Alice and Bob" or "Problem1.pdf submitted by a team of three students"

Existing Structure

The Submission model already handles team uploads:

| Field/Association     | Type/Kind      | Description                                                            |
|-----------------------|----------------|------------------------------------------------------------------------|
| assignment_id         | DB column (FK) | The assignment this submission belongs to                              |
| tutorial_id           | DB column (FK) | The tutorial context (preserved for performance and historical accuracy) |
| user_submission_joins | Association    | Join table linking submission to team members                          |
| users                 | Association    | All team members who submitted this file                               |
| manuscript_data       | DB column      | Uploaded PDF via Shrine                                                |
| correction_data       | DB column      | Graded/annotated PDF via Shrine                                        |
| token                 | DB column      | Unique identifier for secure access                                    |
| accepted              | DB column      | Boolean for late submission approval                                   |
| invited_user_ids      | DB column      | Array of invited team members                                          |

Assessment Integration (Changed)

To integrate with the grading system, the submission structure changes:

| Field/Association | Type/Kind      | Description                                                                                 |
|-------------------|----------------|---------------------------------------------------------------------------------------------|
| assessment_id     | DB column (FK) | Replaces assignment_id: now links directly to Assessment for generality                     |
| tutorial_id       | DB column (FK) | Kept: provides tutorial context, fast queries, and historical accuracy even if rosters change |
| task_id           | DB column (FK) | New: optional link to a specific task for per-task uploads                                  |
| task_points       | Association    | New: TaskPoint records created when grading this submission                                 |

Rationale for Key Decisions

Why change assignment_id to assessment_id:

  • More general: Enables future support for exam and talk submissions (e.g., scanned answer sheets, presentation files)
  • Decouples submissions from specific domain models
  • Aligns with unified grading architecture

Current Implementation Scope

The model uses assessment_id instead of assignment_id to enable future extensibility. However, the current implementation is limited to assignments only. Submission UI, upload workflows, and grading interfaces exist only for the Assignment type. Support for exam and talk submissions is documented in Future Extensions.

Why keep tutorial_id:

  • Performance: Fast queries for "all submissions in Tutorial X" without user joins
  • Disambiguation: Determines which tutorial grades cross-tutorial teams (edge case)
  • Historical accuracy: Preserves context even if students change tutorials mid-semester

Migration Guide

Overview: Transition existing submissions from assignment_id to assessment_id.

Steps:

  1. Add assessment_id column to submissions table (with foreign key constraint)
  2. Backfill: for each submission, set assessment_id from submission.assignment.assessment.id
  3. Remove the old assignment_id column and its foreign key
  4. Update Submission model:
    • Change belongs_to :assignment to belongs_to :assessment, class_name: "Assessment::Assessment"
    • Update any code that references submission.assignment to use assessment navigation

Consideration: Ensure all assignments have their assessments created before running the backfill migration.
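
The backfill in step 2 can be simulated in plain Ruby. Hashes stand in for the real tables here, and all IDs are illustrative; note how `fetch` models the consideration above by raising if an assignment has no assessment yet:

```ruby
# Sketch: resolve each submission's new assessment_id through its
# assignment's linked assessment before dropping assignment_id.
assignments = {            # assignment_id => assessment_id
  10 => 100,
  11 => 101
}

submissions = [
  { id: 1, assignment_id: 10 },
  { id: 2, assignment_id: 11 }
]

submissions.each do |s|
  # fetch raises KeyError if an assignment lacks an assessment,
  # surfacing the precondition instead of silently writing nil.
  s[:assessment_id] = assignments.fetch(s[:assignment_id])
end

submissions.first  # => { id: 1, assignment_id: 10, assessment_id: 100 }
```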

Behavior Highlights

  • Team submissions already work via has_many :users through user_submission_joins
  • One submission can have multiple owners (team members)
  • Optional task_id enables per-task file uploads for granular grading
  • Grading service targets the submission and fans out points to all team members
  • File uploads handled via Shrine for manuscript and correction PDFs
  • Token-based sharing for team formation

Usage Scenarios

  • Team homework submission: Alice, Bob, and Carol form a team for Homework 3. Alice uploads HW3.pdf via the submission interface. The system creates one Submission record linked to all three students via user_submission_joins, then updates each team member's Assessment::Participation record: status: :submitted and submitted_at: Time.current. When a tutor grades this submission, TaskPoint records are created for all three team members with identical points.

  • Per-task uploads (new feature): An assignment allows students to upload separate files for each problem. The team uploads Problem1.pdf with task_id: 1, Problem2.pdf with task_id: 2. Each upload updates the team members' Assessment::Participation.submitted_at timestamp (idempotent if already set). Tutors can grade each problem independently, and the grading service still fans out points to all team members for each task.

  • Audit trail for complaints: A student complains about their grade on Problem 2. The teacher queries the TaskPoint record, follows the submission_id link, and retrieves the original Problem2.pdf file to review the grading decision.

  • Individual submissions: For assignments that don't allow teams, each student uploads their own file. The Submission has only one entry in user_submission_joins, maintaining backward compatibility with the existing single-user flow.
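The idempotent `submitted_at` behavior from the per-task upload scenario can be sketched framework-free. `Participation` here is a stand-in Struct, not the real `Assessment::Participation` model:

```ruby
# Stand-in for Assessment::Participation; illustrative only.
Participation = Struct.new(:user_id, :status, :submitted_at)

# Mark every team member's participation as submitted.
# Idempotent: a later per-task upload keeps the first timestamp.
def mark_submitted!(participations, now:)
  participations.each do |part|
    part.status = :submitted
    part.submitted_at ||= now
  end
end

team = [Participation.new(101), Participation.new(102)]
mark_submitted!(team, now: "2025-01-10T12:00")
mark_submitted!(team, now: "2025-01-10T13:00")  # second upload: timestamp unchanged
```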


Assessment::SubmissionGrader (Service)

Team-Aware Grading Orchestrator

What it represents

Coordinates the grading workflow: takes one submission and distributes points to all team members automatically.

Think of it as

"Grade the file once, points apply to the whole team."

Public Interface

| Method | Description |
|---|---|
| grade_task!(submission:, task:, points:, grader:, comment: nil) | Grades one task for all team members |
| grade_tasks!(submission:, grades_by_task_id:, grader:) | Bulk grades multiple tasks at once |

Behavior Highlights

  • Fan-out pattern: one submission graded → Assessment::TaskPoint created for each team member
  • Idempotent: re-grading the same submission/task overwrites points consistently
  • Links each Assessment::TaskPoint back to the submission_id for audit trail
  • Triggers Assessment::Participation.recompute_points_total! after grading
  • Validates that the task belongs to the submission's assessment
  • Wraps all operations in a database transaction for atomicity
  • Visibility controlled separately via assessment.results_published

Example Implementation

# filepath: app/services/assessment/submission_grader.rb
module Assessment
  class SubmissionGrader
    def grade_task!(submission:, task:, points:, grader:, comment: nil)
      assessment = submission.assessment
      raise ArgumentError, "Task not in assessment" unless
        task.assessment_id == assessment.id

      member_ids = submission.users.pluck(:id)
      parts = assessment.participations.where(user_id: member_ids)

      ApplicationRecord.transaction do
        parts.find_each do |part|
          # Upsert: re-grading overwrites the existing TaskPoint
          tp = Assessment::TaskPoint.find_or_initialize_by(
            assessment_participation_id: part.id,
            task_id: task.id
          )
          tp.points = points
          tp.grader = grader
          tp.comment = comment if comment.present?
          tp.submission_id = submission.id
          tp.save!
        end
        parts.find_each(&:recompute_points_total!)
      end
    end

    def grade_tasks!(submission:, grades_by_task_id:, grader:)
      # Outer transaction: the nested grade_task! transactions join it,
      # so bulk grading is all-or-nothing.
      ApplicationRecord.transaction do
        Task.where(id: grades_by_task_id.keys).find_each do |t|
          grade_task!(
            submission: submission,
            task: t,
            points: grades_by_task_id[t.id],
            grader: grader
          )
        end
      end
    end
  end
end

Usage Scenarios

  • Grading a team homework: A tutor grades Problem 1 of a submission by Alice, Bob, and Carol. They call Assessment::SubmissionGrader.new.grade_task!(submission: sub, task: problem1, points: 8, grader: tutor). The service creates three Assessment::TaskPoint records (one per team member), each with 8 points and linked to the same submission. Each team member's Assessment::Participation.points_total is updated.

  • Bulk grading all tasks: After reviewing the entire submission, the tutor calls service.grade_tasks!(submission: sub, grades_by_task_id: { 1 => 8, 2 => 12, 3 => 5 }, grader: tutor). The service iterates through each task and fans out points, updating all participations in a single transaction.

  • Re-grading after complaint: A student complains about Problem 2. The tutor reviews and agrees, calling grade_task! again with updated points. The existing Assessment::TaskPoint records are overwritten (upsert), and totals are recomputed. The audit trail via submission_id remains intact.

  • Publishing results: Tutors grade all submissions. Once grading is complete, the teacher calls assessment.update!(results_published: true), making all points visible to students at once.


ERD

erDiagram
    Assessment ||--o{ Participation : "has many"
    Assessment ||--o{ Task : "has many"
    Assessment }o--|| Assessable : "belongs to (polymorphic)"

    Participation ||--o{ TaskPoint : "has many"
    Participation }o--|| User : "belongs to"
    Participation }o--|| Assessment : "belongs to"

    Task ||--o{ TaskPoint : "has many"
    Task }o--|| Assessment : "belongs to"

    TaskPoint }o--|| Participation : "belongs to"
    TaskPoint }o--|| Task : "belongs to"
    TaskPoint }o--|| Submission : "belongs to (optional)"
    TaskPoint }o--|| User : "graded by (optional)"

    Submission ||--o{ TaskPoint : "generates (optional)"
    Submission ||--o{ UserSubmissionJoin : "has many"
    Submission }o--|| Assessment : "belongs to"
    Submission }o--|| Tutorial : "belongs to"
    Submission }o--|| Task : "for specific task (optional)"

    UserSubmissionJoin }o--|| Submission : "belongs to"
    UserSubmissionJoin }o--|| User : "belongs to"

    Assignment ||--|| Assessment : "assessable"
    Exam ||--|| Assessment : "assessable"
    Talk ||--|| Assessment : "assessable"

Sequence Diagram: Assessment Creation & Submission Workflow

sequenceDiagram
    actor Teacher
    participant A as Assignment
    participant Assess as Assessment::Assessment
    participant L as Lecture
    participant Tut as Tutorial
    participant Part as Assessment::Participation
    actor Student
    participant Sub as Submission

    Teacher->>A: Create assignment
    A->>Assess: ensure_pointbook!(title, requires_submission: true)
    Assess->>Assess: Create/update assessment record
    Assess-->>A: Assessment created

    Teacher->>A: seed_participations_from_roster!
    A->>L: lecture.tutorials
    L-->>A: [tutorial_1, tutorial_2, ...]

    loop For each tutorial
        A->>Tut: tutorial.roster_user_ids
        Tut-->>A: [student_ids...]
    end

    A->>A: user_ids.uniq (deduplicate)

    loop For each unique student
        A->>Part: Create participation
        Part->>Part: Set status: :not_started
        Part->>Part: Set points_total: 0
    end

    A-->>Teacher: Participations seeded

    Teacher->>Assess: Add tasks (Problem 1, Problem 2, ...)
    Assess->>Assess: Create Assessment::Task records

    Note over Teacher,Assess: Assessment is now ready for student work

    Student->>Sub: Upload homework file
    Sub->>Sub: Create submission record
    Sub->>Sub: Link to team members via user_submission_joins

    loop For each team member
        Sub->>Part: Find participation by user_id
        Part->>Part: Update status: :submitted
        Part->>Part: Set submitted_at: Time.current
    end

    Sub-->>Student: Submission confirmed

    Note over Student,Part: Participations track submission history<br/>even after grading

Sequence Diagram: Team Grading Workflow

sequenceDiagram
    actor Tutor
    participant UI as Grading UI
    participant SG as Assessment::SubmissionGrader
    participant Sub as Submission
    participant Part as Assessment::Participation
    participant TP as Assessment::TaskPoint

    Tutor->>UI: Select submission for Problem 1
    UI->>Sub: Fetch team members
    Sub-->>UI: [Alice, Bob, Carol]

    Tutor->>UI: Enter points: 8/10
    UI->>SG: grade_task!(submission, task, 8, tutor)

    SG->>Sub: submission.assessment
    Sub-->>SG: Assessment
    SG->>Sub: submission.users.pluck(:id)
    Sub-->>SG: [user_id_1, user_id_2, user_id_3]

    SG->>Part: Find participations for team members
    Part-->>SG: [participation_1, participation_2, participation_3]

    rect rgb(240, 248, 255)
        Note over SG,TP: Database Transaction

        loop For each team member
            SG->>TP: find_or_initialize_by(participation, task)
            TP-->>SG: TaskPoint instance
            SG->>TP: Update points, grader, submission_id
            SG->>TP: save!
        end

        loop For each participation
            SG->>Part: recompute_points_total!
            Part->>TP: sum(:points)
            TP-->>Part: Updated total
            Part->>Part: update!(points_total)
        end
    end

    SG-->>UI: Grading complete
    UI-->>Tutor: Show success confirmation

    Note over Tutor,TP: Points visible when<br/>assessment.results_published = true

State Diagram: Assessment Status Transitions

stateDiagram-v2
    [*] --> draft: Assessment created

    draft --> open: Teacher opens for students
    draft --> archived: Cancelled before opening

    open --> closed: Due date passed / manually closed

    closed --> graded: All participations graded
    closed --> open: Reopened (deadline extended)

    graded --> archived: Semester ends
    graded --> closed: Re-opened for re-grading

    archived --> [*]

    note right of draft
        Teacher configures tasks,
        not visible to students
    end note

    note right of open
        Students can view/submit,
        results_published: false
    end note

    note right of closed
        No more submissions,
        grading in progress
    end note

    note right of graded
        All graded,
        results can be published
    end note

Proposed Folder Structure

app/
├── models/
│   ├── assessment/
│   │   ├── assessment.rb
│   │   ├── participation.rb
│   │   ├── task.rb
│   │   ├── task_point.rb
│   │   ├── assessable.rb
│   │   ├── pointable.rb
│   │   └── gradable.rb
│   │
│   ├── assignment.rb           # includes Assessment::Pointable
│   ├── talk.rb                 # includes Assessment::Gradable
│   ├── exam.rb                 # includes both concerns + Registration + Roster
│   └── submission.rb           # extended with assessment_id
│
└── services/
    └── assessment/
        └── submission_grader.rb

Key Files:

  • Models: app/models/assessment/ contains all namespaced models
  • Concerns: Assessable, Pointable, Gradable live within the namespace
  • Services: app/services/assessment/submission_grader.rb handles team grading
  • Enhanced Models: Assignment, Talk, Exam include the assessment concerns
  • Migrations: Will include changes to add assessment_id to submissions table

Database Tables

The following tables support the assessment system:

| Table Name | Namespace Model | Purpose |
|---|---|---|
| assessments | Assessment::Assessment | Gradebook containers for graded work |
| assessment_participations | Assessment::Participation | Per-student grade records |
| assessment_tasks | Assessment::Task | Graded components within assessments |
| assessment_task_points | Assessment::TaskPoint | Per-student, per-task points |
| submissions | Submission | Existing model, extended with assessment_id |

Naming rationale: Namespaced table names follow Rails conventions and prevent collisions with potential future models (e.g., Quiz::Task, Exercise::Task).

Schema Updates for Per-Tutorial Publication

New columns for assessment_participations:

# filepath: db/migrate/20250105000000_add_tutorial_and_publication_to_participations.rb
class AddTutorialAndPublicationToParticipations < ActiveRecord::Migration[7.0]
  def change
    add_reference :assessment_participations, :tutorial,
      foreign_key: true, null: true, index: true
    add_column :assessment_participations, :results_published_at,
      :datetime, null: true
    add_index :assessment_participations, :results_published_at
  end
end

Migration rationale:

  • tutorial_id: Nullable to support exams and talks without tutorial context
  • results_published_at: Enables per-tutorial publication for assignments
  • Indexed for fast tutorial-scoped queries and publication status checks
  • Foreign key constraint maintains referential integrity
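The intended semantics of results_published_at can be sketched framework-free; in the real system this would be an ActiveRecord query (e.g. a tutorial-scoped update), and the names below are illustrative stand-ins:

```ruby
# Stand-ins for Assessment::Participation rows; illustrative only.
Participation = Struct.new(:tutorial_id, :results_published_at)

# Publish results for a single tutorial; rows already published keep
# their original timestamp (idempotent).
def publish_tutorial!(participations, tutorial_id:, now:)
  participations
    .select { |p| p.tutorial_id == tutorial_id }
    .each { |p| p.results_published_at ||= now }
end

# A student's result is visible once their participation is published.
def results_visible?(participation)
  !participation.results_published_at.nil?
end

parts = [Participation.new(1), Participation.new(1), Participation.new(2)]
publish_tutorial!(parts, tutorial_id: 1, now: "2025-01-05")
```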

Backfill strategy for existing data:

# For existing assignment participations, backfill tutorial_id from submission
Submission.includes(:users, :tutorial).find_each do |sub|
  sub.users.each do |user|
    participation = Assessment::Participation.find_by(
      assessment_id: sub.assessment_id,
      user_id: user.id
    )
    participation&.update_column(:tutorial_id, sub.tutorial_id)
  end
end

Exam Model

What is an 'Exam'?

An exam is a scheduled assessment event where students demonstrate their knowledge under controlled conditions.

  • Common Examples: "Final Exam Linear Algebra", "Midterm Calculus", "Retake Exam Analysis"
  • In this context: A new domain model that acts as a registration target (students sign up for exam slots), manages rosters (tracking who is registered), and links to the assessment system for grading. Exams belong to a lecture.

Problem Overview

MaMpf needs a formal representation of exams that can:

  • Act as a registration target with capacity limits and eligibility checks (see Student Performance)
  • Track which students are registered for which exam dates/locations
  • Link to the assessment system for grading
  • Support multiple exam dates per lecture (e.g., Hauptklausur, Nachklausur, Wiederholungsklausur)

Solution Architecture

We introduce a new Exam model that:

  • Belongs to a Lecture: Each exam is scoped to a specific lecture offering
  • Implements Registration::Registerable: Acts as a registration target (students register for the exam)
  • Implements Roster::Rosterable: Manages the list of registered students
  • Implements Assessment::Assessable: Links to an Assessment::Assessment for grading

The parent Lecture (which implements Registration::Campaignable) hosts the registration campaigns. Each exam (Hauptklausur, Nachklausur, etc.) gets its own campaign with that exam as the sole registerable item.


Exam (ActiveRecord Model)

What it represents

A scheduled exam event with date, location, capacity, and registration deadline.

Think of it as

The exam equivalent of a Tutorial—it's both a thing students register for and a thing that gets graded.

Key Attributes

| Field | Type | Description |
|---|---|---|
| lecture_id | FK | The lecture this exam belongs to (required) |
| title | String | Exam title (e.g., "Hauptklausur", "Nachklausur") |
| date | DateTime | Scheduled exam date and time |
| location | String | Physical location or online meeting link |
| capacity | Integer | Maximum number of exam participants (nullable; nil = unlimited) |
| description | Text | Additional exam details and instructions |

Role in the System

1. As Registerable (Registration Target)

# The parent lecture hosts the campaign
lecture = Lecture.find(123)
campaign = lecture.registration_campaigns.create!(
  title: "Hauptklausur Registration",
  allocation_mode: :first_come_first_served,
  registration_deadline: 2.weeks.from_now
)

# The exam is the sole registerable item
exam = lecture.exams.create!(
  title: "Hauptklausur",
  date: 3.weeks.from_now,
  capacity: 200
)
campaign.registration_items.create!(registerable: exam)

2. As Rosterable (Student Tracking)

# After allocation, students are materialized into the exam roster
exam.roster_user_ids # => [101, 102, 103, ...]

3. As Assessable (Grading Container)

# After the exam, link it to an assessment for grading
assessment = Assessment::Assessment.create!(
  assessable: exam,
  lecture: exam.lecture,
  title: "#{exam.title} Grading"
)

Example Implementation

class Exam < ApplicationRecord
  belongs_to :lecture

  include Registration::Registerable
  include Roster::Rosterable
  include Assessment::Assessable

  validates :lecture, presence: true
  validates :title, presence: true
  validates :date, presence: true
  validates :capacity, numericality: { greater_than: 0, allow_nil: true }

  def materialize_allocation!(user_ids:, campaign:)
    replace_roster!(
      user_ids: user_ids,
      source_type: "Registration::Campaign",
      source_id: campaign.id
    )
  end

  def registration_open?
    Time.current < registration_deadline
  end

  def past?
    date < Time.current
  end
end

Database Migration

class CreateExams < ActiveRecord::Migration[7.0]
  def change
    create_table :exams do |t|
      t.references :lecture, null: false, foreign_key: true
      t.string :title, null: false
      t.datetime :date, null: false
      t.string :location
      t.integer :capacity  # nullable: nil means unlimited (see model validation)
      t.datetime :registration_deadline
      t.text :description

      t.timestamps
    end

    add_index :exams, [:lecture_id, :date]
  end
end

Multiple Choice Exam Extension

For exams that include multiple choice components requiring legal compliance, see the Multiple Choice Exams chapter. That extension adds has_multiple_choice and mc_weight fields to the schema.


Exam Registration Flow

Goal

Enable students to register for an exam slot while enforcing eligibility and capacity constraints.

Eligibility Requirement

Exam registration typically requires students to meet certain criteria (e.g., earning 50% of homework points). This is handled by the student performance certification system documented in Student Performance. The eligibility check is enforced via a Registration::Policy with kind: :student_performance.

Setup (Staff Actions)

| Step | Action | Technical Details |
|---|---|---|
| 1 | Create exam | lecture.exams.create!(title: "Hauptklausur", date: ..., capacity: 150) |
| 2 | Create campaign | lecture.registration_campaigns.create!(...) (lecture as campaignable) |
| 3 | Create item | campaign.registration_items.create!(registerable: exam) |
| 4 | Add eligibility policy | campaign.registration_policies.create!(kind: :student_performance) - see Student Performance |
| 5 | Create certifications | Teacher creates StudentPerformance::Certification records for eligible students (see Student Performance) |
| 6 | Pre-flight check | Before opening, verify all active users have certifications (see End-to-End Workflow Phase 7) |
| 7 | Finalization filtering | On finalize, only allocate students with Certification.status IN (:passed, :forced_passed) |

Preconditions: lecture.performance_total_points must be set; certifications must exist for all active lecture users.

Student Experience

  1. Student visits exam registration campaign page
  2. System checks eligibility via Registration::PolicyEngine (queries StudentPerformance::Certification.status)
  3. If ineligible, student sees error message explaining why (e.g., "Certification pending" or "Certification failed")
  4. If eligible (status IN passed/forced_passed), student sees registration interface
  5. Student submits registration
  6. Registration is confirmed immediately (FCFS) or after deadline (preference-based, if multiple exam dates)
  7. After registration closes, materialize_allocation! updates exam roster (allocation filtered to only certified students)
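The eligibility gate in steps 2–4 reduces to a status check on the student's certification record. A minimal framework-free sketch (Certification is a stand-in Struct, not the real model):

```ruby
# Stand-in for StudentPerformance::Certification; illustrative only.
Certification = Struct.new(:user_id, :status)

ELIGIBLE_STATUSES = [:passed, :forced_passed].freeze

# A student may register only with a passed (or force-passed) certification;
# a missing, pending, or failed certification blocks registration.
def eligible_for_exam?(certification)
  !certification.nil? && ELIGIBLE_STATUSES.include?(certification.status)
end

eligible_for_exam?(Certification.new(1, :passed))   # => true
eligible_for_exam?(Certification.new(2, :pending))  # => false
eligible_for_exam?(nil)                             # => false
```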

Exam Grading Flow

Goal

Record and process exam grades using the assessment system.

After Exam is Administered

| Step | Action | Technical Details |
|---|---|---|
| 1 | Create assessment | Assessment::Assessment.create!(assessable: exam, ...) |
| 2 | Seed participations | System creates Assessment::Participation for each registered student |
| 3 | Define tasks | Staff creates Assessment::Task records (e.g., Problem 1, Problem 2) |
| 4 | Enter grades | Tutors record Assessment::TaskPoint for each student/task |
| 5 | Apply grade scheme | Staff applies GradeScheme::Scheme to convert points to letter grades |
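Step 5's points-to-grade conversion can be sketched as follows. The hash-based scheme shape and the threshold values are invented for illustration; they are not the actual GradeScheme::Scheme API:

```ruby
# Illustrative scheme: minimum points => grade (thresholds invented).
SCHEME = { 90 => "1.0", 75 => "2.0", 60 => "3.0", 50 => "4.0" }.freeze

def grade_for(points, scheme = SCHEME)
  # Walk thresholds from highest to lowest; the first one met wins.
  _min, grade = scheme.sort_by { |threshold, _| -threshold }
                      .find { |threshold, _| points >= threshold }
  grade || "5.0"  # below every threshold: fail
end

grade_for(92)  # => "1.0"
grade_for(55)  # => "4.0"
grade_for(40)  # => "5.0"
```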

Multiple Choice Exam Extension

For exams with multiple choice components requiring legal compliance, see the Multiple Choice Exams chapter for the two-stage grading process.


Usage Scenarios

Scenario 1: Regular Final Exam

exam = lecture.exams.create!(
  title: "Final Exam",
  date: Date.new(2025, 2, 15),
  location: "Main Hall",
  capacity: 200,
  registration_deadline: Date.new(2025, 2, 1)
)

campaign = lecture.registration_campaigns.create!(
  title: "Final Exam Registration",
  allocation_mode: :first_come_first_served,
  registration_deadline: exam.registration_deadline
)
campaign.registration_items.create!(registerable: exam)

campaign.registration_policies.create!(
  kind: :student_performance,
  config: { lecture_id: lecture.id }
)

# Teacher creates certifications for eligible students
lecture.active_users.find_each do |user|
  evaluator = StudentPerformance::Evaluator.new(lecture: lecture, user: user)
  proposal = evaluator.proposal

  StudentPerformance::Certification.create!(
    lecture: lecture,
    user: user,
    status: proposal[:status],  # :passed or :failed
    rule_snapshot: proposal[:rule_snapshot],
    notes: proposal[:notes]
  )
end

# Pre-flight check before opening
campaign.validate_certifications!  # raises if missing certifications

Scenario 2: Multiple Exam Dates (Regular + Retake)

regular_exam = lecture.exams.create!(
  title: "Regular Exam",
  date: Date.new(2025, 2, 15),
  capacity: 200
)

retake_exam = lecture.exams.create!(
  title: "Retake Exam",
  date: Date.new(2025, 3, 15),
  capacity: 50
)

campaign = lecture.registration_campaigns.create!(
  title: "Exam Date Selection",
  allocation_mode: :preference_based
)

campaign.registration_items.create!(registerable: regular_exam)
campaign.registration_items.create!(registerable: retake_exam)

State Diagram

stateDiagram-v2
    [*] --> Created
    Created --> RegistrationOpen : registration_deadline not reached
    RegistrationOpen --> RegistrationClosed : deadline passed
    RegistrationClosed --> Administered : exam date reached
    Administered --> Graded : grades entered
    Graded --> [*]

Proposed File Structure

app/
└── models/
    └── exam.rb

Student Performance

What is 'Student Performance'?

A student performance system tracks and materializes student achievement across all coursework in a lecture for multiple purposes.

  • Common Examples: "Alice earned 80% of homework points", "Bob completed 2 presentations"
  • In this context: A unified system that materializes student performance data (points, achievements) for use in dashboards, exam registration policies, certificates, and early intervention.

Problem Overview

After coursework and achievements are recorded, MaMpf needs to:

  • Enforce prerequisites: Prevent unqualified students from being finalized on exam rosters.
  • Support flexible criteria: Combine point thresholds, achievement counts, and custom rules.
  • Materialize results: Store computed performance data to avoid expensive queries during registration page loads.
  • Guarantee correctness: Recompute performance data on demand to ensure decisions use fresh facts.
  • Allow teacher certification: Let teachers confirm pass/fail (with manual overrides) with an audit trail.
  • Trigger recomputation: Update materialized data when coursework grades change or policies are updated.
  • Integrate with registration: Work seamlessly with the Registration::Policy system.

Solution Architecture

We use a factual materialization + teacher certification + phased policy checks:

  • Factual Source: StudentPerformance::Record stores materialized performance data per (lecture, user). It contains facts only, not interpretations.
  • Not a Cache: This is an authoritative data snapshot for reads. Correctness is ensured by just-in-time recomputation in critical flows.
  • Teacher Certification: StudentPerformance::Certification captures teacher-declared status (pending, passed, failed) with audit fields and a snapshot of the rule used when certifying. Manual overrides are encoded as source: :manual.
  • Policy Phases: Registration::Policy entries are evaluated by phase: registration, finalization, or both. Enforcement happens only if a policy is configured for that phase. Policies check Certification status at runtime once certifications are complete.
  • Service-Based Computation: StudentPerformance::Service aggregates points and achievements and upserts the factual StudentPerformance::Record.
  • Evaluator (Teacher Tool): StudentPerformance::Evaluator is a teacher-facing tool that interprets factual records to generate bulk certification proposals and show rule change impact. It is never called during registration/finalization runtime.
  • Achievement Tracking: A top-level Achievement model records qualitative accomplishments (e.g., blackboard presentations).
  • Recomputation Triggers: Background jobs and on-demand triggers keep the data fresh and guarantee correctness.
  • Audit Trail: Certification provides the authoritative decision and audit (who, when, source, rule snapshot). The Record stays facts-only.
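The facts that StudentPerformance::Service materializes can be sketched framework-free; the hash keys follow the Record fields documented below, while the method name and inputs are illustrative:

```ruby
# Aggregate raw coursework numbers into the factual fields of a
# StudentPerformance::Record (facts only, no interpretation like "passed").
def performance_facts(points_per_assessment, points_max:, computed_at:)
  total = points_per_assessment.sum
  percentage = points_max.positive? ? (100.0 * total / points_max).round(1) : 0.0
  {
    points_total_materialized: total,
    points_max_materialized: points_max,
    percentage_materialized: percentage,
    computed_at: computed_at
  }
end

performance_facts([8, 20, 30], points_max: 100, computed_at: "2025-01-10")
# => { points_total_materialized: 58, points_max_materialized: 100,
#      percentage_materialized: 58.0, computed_at: "2025-01-10" }
```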

StudentPerformance::Record (ActiveRecord Model)

Materialized Performance Snapshot

What it represents

A materialized database record of a student's performance in a specific lecture, computed from their coursework and achievements. This record contains only facts (points, achievements met) and does not store an interpretation like 'eligible' or 'passed'.

Its purpose is to provide a high-performance data source for read-heavy operations (like dashboards) and to serve as an auditable snapshot of performance at a specific point in time. Correctness for critical operations is guaranteed by just-in-time recomputation.

Think of it as

"As of today, Alice has earned 58% of homework points and completed 2 presentations for the Linear Algebra lecture."

Performance Records vs Exam Roster

StudentPerformance::Record covers all lecture students (e.g., 150 students).

The exam roster (materialized after registration) contains only students who successfully registered (e.g., 85 of 126 eligible students).

These are two distinct lists serving different purposes:

  • Performance records: Track achievement for dashboards, certificates, and eligibility verification
  • Exam roster: Operational list for exam administration and grading

The main fields and methods of StudentPerformance::Record are:

| Name/Field | Type/Kind | Description |
|---|---|---|
| lecture_id | DB column (FK) | The lecture this performance record applies to |
| user_id | DB column (FK) | The student whose performance is materialized |
| points_total_materialized | DB column | Sum of relevant assessment points at computation time |
| points_max_materialized | DB column | Maximum possible points from graded assessments at computation time |
| percentage_materialized | DB column | Computed percentage (points_total / points_max) |
| achievements_met_ids | DB column (JSONB) | Optional list of achievement IDs currently met (factual audit) |
| computed_at | DB column | Timestamp of last computation |

Behavior Highlights

  • Enforces uniqueness per (lecture, user) via database constraint.
  • Contains only factual data; interpretation is handled by Evaluator and teacher certification.
  • Re-computation updates materialized values and computed_at.

Example Implementation

module StudentPerformance
  class Record < ApplicationRecord
    self.table_name = "student_performance_records"

    belongs_to :lecture
    belongs_to :user

    validates :lecture_id, uniqueness: { scope: :user_id }
  end
end

Dispatcher difference

The Registration::Policy#evaluate in the Registration chapter uses a case dispatch to delegate to eval_exam, eval_email, etc. This chapter focuses only on the exam (student performance) branch and shows its internal logic. For the canonical dispatcher, see Registration → Registration::Policy and the eval_exam note that points back here.

Usage Scenarios

  • After coursework completion: A background job runs StudentPerformance::Service.new(lecture: ...).compute_and_upsert_all_records!. Alice's record is created with points_total_materialized: 58, percentage_materialized: 58. The record itself does not say if she passed.

  • Teacher certification workflow: The teacher opens the Certification UI, which uses the Evaluator to generate proposals for all students. The teacher reviews and creates Certification rows (pending/passed/failed). Manual edge cases are set with source: :manual.

  • Registration runtime: Bob tries to register for the exam. The system:

    1. Recomputes Bob's record just-in-time to ensure facts are current
    2. Checks if Bob has a Certification with status: :passed
    3. Allows or blocks registration based on certification status
  • Finalization runtime: Before materializing the exam roster, the system checks that all confirmed registrants have Certification with status: :passed. Any missing/pending/failed certifications block finalization and trigger remediation UI.


Achievement (ActiveRecord Model)

Qualitative Student Accomplishments (Assessable Type)

What it represents

An assessable type that tracks qualitative student accomplishments during a lecture (e.g., blackboard presentations, discussion participation). Unlike assignments or exams, achievements can be boolean (pass/fail), numeric (count-based), or percentage-based. They integrate with the Assessment infrastructure for participation tracking and tutor grading workflows.

Think of it as

"Blackboard Presentation Achievement (boolean pass/fail)", "Attendance Achievement (numeric: 12 of 15)", "Lab Participation Achievement (percentage: 80%)"

Key Fields & Associations

| Name/Field | Type/Kind | Description |
|---|---|---|
| lecture_id | DB column (FK) | The lecture this achievement belongs to |
| title | DB column | Human-readable name (e.g., "Blackboard Presentation") |
| value_type | DB column (Enum) | How achievement is measured: boolean, numeric, percentage |
| threshold | DB column | Required value for completion (nil for boolean, count for numeric, percentage for percentage) |
| description | DB column (Text) | Optional explanation shown to students |
| rule_achievements | Association | Has many StudentPerformance::RuleAchievement (join table) |
| assessment | Association | Has one Assessment::Assessment (polymorphic assessable) |

Value Types

| Type | Threshold Meaning | Participation Grade Encoding | Example |
|---|---|---|---|
| boolean | Not used (always pass/fail) | "Pass" or "Fail" | Blackboard presentation (yes/no) |
| numeric | Required count | Integer count as grade_value | Attendance: 12 of 15 |
| percentage | Required percentage (0-100) | Percentage as grade_value | Lab participation: 75% |

Behavior Highlights

  • Assessable Integration: Each Achievement has one Assessment::Assessment record where assessable_type = "Achievement" and assessable_id = achievement.id
  • Participation Seeding: When created, participations are seeded for all students in the lecture roster
  • Tutor Grading: Tutors mark achievement completion via existing Assessment::Participation editing UI:
    • Boolean: Check/uncheck "Completed" → sets grade_value: "Pass"/"Fail"
    • Numeric: Enter count → sets grade_value: <count>
    • Percentage: Enter percentage → sets grade_value: <percentage>
  • No Tasks/Submissions: Achievements do not use Assessment::Task (no per-task breakdown) and do not require file uploads (requires_submission: false)
  • Eligibility Checking: StudentPerformance::Service reads participation grade_value to determine if student meets threshold
  • Deletion Protection: Cannot delete achievement if referenced by any rule (dependent: :restrict_with_error). Database FK constraint provides additional layer (on_delete: :restrict)

Example Implementation

class Achievement < ApplicationRecord
  include Assessment::Assessable

  belongs_to :lecture
  has_many :rule_achievements,
           class_name: "StudentPerformance::RuleAchievement",
           dependent: :restrict_with_error

  enum value_type: { boolean: 0, numeric: 1, percentage: 2 }

  validates :lecture_id, :value_type, :title, presence: true
  validates :threshold, numericality: { greater_than: 0 }, if: -> { numeric? || percentage? }
  validates :threshold, absence: true, if: :boolean?

  after_create :create_assessment_infrastructure

  def create_assessment_infrastructure
    ensure_assessment!(
      title: title,
      requires_points: false,
      requires_submission: false
    )
    seed_participations_from_roster!
  end

  def seed_participations_from_roster!
    # Override from Assessment::Assessable concern
    # Achievement roster = all lecture students
    assessment.seed_participations_from!(user_ids: lecture.students.pluck(:id))
  end

  def student_met_threshold?(user)
    participation = assessment.participations.find_by(user: user)
    return false unless participation&.grade_value.present?

    case value_type
    when "boolean"
      participation.grade_value == "Pass"
    when "numeric"
      participation.grade_value.to_i >= threshold
    when "percentage"
      participation.grade_value.to_f >= threshold
    end
  end
end

Usage Scenarios

  • Teacher creates achievement: Navigate to Lecture → Assessments → New Assessment → select "Achievement". Enter title ("Blackboard Presentation"), choose value_type ("boolean"). System creates Achievement + Assessment + Participations for all students.

  • Tutor marks completion: In tutorial roster view, tutor sees participation list for "Blackboard Presentation" achievement. Checks box next to Emma's name → participation.grade_value = "Pass".

  • Eligibility computation: StudentPerformance::Service calls achievement.student_met_threshold?(emma) which checks if Emma's participation has grade_value: "Pass".


StudentPerformance::Rule (ActiveRecord Model)

Eligibility Criteria Configuration

What it represents

A configuration record that defines the criteria a student must meet to be eligible for an exam. Each lecture has at most one active rule, which is evaluated to determine eligibility.

Think of it as

"To be eligible for the Linear Algebra exam, you need 50% homework points AND 1 blackboard presentation"

Key Fields & Associations

| Name/Field | Type/Kind | Description |
|---|---|---|
| lecture_id | DB column (FK) | The lecture this rule applies to |
| min_percentage | DB column | Minimum percentage of points (0-100), mutually exclusive with min_points_absolute |
| min_points_absolute | DB column | Minimum absolute points, mutually exclusive with min_percentage |
| active | DB column (Bool) | Whether this rule is currently in effect |
| rule_achievements | Association | Join records linking to required achievements |
| required_achievements | Association | Achievement records that must be completed (via rule_achievements) |

Behavior Highlights

  • Stored as a database record (not just JSONB config) for better querying and validation
  • One lecture can have one active rule at a time
  • References multiple achievements via join table (student_performance_rule_achievements)
  • Database-level integrity prevents deletion of achievements still referenced by rules
  • Enforces mutual exclusivity of percentage vs absolute point thresholds
  • Points are aggregated from all assignments of the lecture (no filtering by type or archived status)

Example Implementation

module StudentPerformance
  class Rule < ApplicationRecord
    self.table_name = "student_performance_rules"

    belongs_to :lecture
    has_many :rule_achievements,
             class_name: "StudentPerformance::RuleAchievement",
             dependent: :destroy
    has_many :required_achievements,
             through: :rule_achievements,
             source: :achievement

    validates :lecture_id, presence: true
    validates :min_percentage, numericality: { greater_than_or_equal_to: 0, less_than_or_equal_to: 100 }, allow_nil: true
    validates :min_points_absolute, numericality: { greater_than_or_equal_to: 0 }, allow_nil: true
    validate :percentage_or_absolute_not_both

    private

    def percentage_or_absolute_not_both
      if min_percentage.present? && min_points_absolute.present?
        errors.add(:base, "Cannot specify both percentage and absolute point threshold")
      end
    end
  end
end

module StudentPerformance
  class RuleAchievement < ApplicationRecord
    self.table_name = "student_performance_rule_achievements"

    belongs_to :rule, class_name: "StudentPerformance::Rule"
    belongs_to :achievement

    validates :rule_id, uniqueness: { scope: :achievement_id }
    validates :position, presence: true

    acts_as_list scope: :rule  # For ordering in UI
  end
end

Usage Scenarios

  • Professor sets up rule: Teacher first creates achievements:

    presentation = Achievement.create!(lecture: linear_algebra, title: "Blackboard Presentation", value_type: :boolean)
    attendance = Achievement.create!(lecture: linear_algebra, title: "Lab Attendance", value_type: :numeric, threshold: 12)
    

    Then creates rule and associates achievements:

    rule = StudentPerformance::Rule.create!(lecture: linear_algebra, min_percentage: 50)
    rule.required_achievements << [presentation, attendance]
    
  • Mid-semester adjustment: Professor realizes 50% is too strict, updates: rule.update!(min_percentage: 45). System triggers recomputation for all students.

  • Adding achievement to rule: Professor adds new requirement: rule.required_achievements << bonus_achievement. Join table automatically creates relationship.

  • Preventing achievement deletion: Teacher tries to delete an achievement used in a rule: presentation.destroy returns false because dependent: :restrict_with_error adds an error and aborts the callback chain (a raw delete would still hit the database FK constraint and raise ActiveRecord::InvalidForeignKey). UI shows: "Cannot delete - used in 2 performance rules".

  • Service uses rule: The computation service loads the active rule: rule = StudentPerformance::Rule.find_by(lecture: lecture, active: true) and accesses rule.required_achievements for evaluation. Points are aggregated from all lecture assignments.


StudentPerformance::Service (Service Object)

Performance Computer

What it represents

A service that computes a student's performance by aggregating assessment points and checking achievements. It upserts these facts into a materialized StudentPerformance::Record.

Think of it as

The "performance calculator" that gathers all the data and stamps it into a student's performance file.

Public Interface

| Method | Purpose |
|---|---|
| initialize(lecture:) | Sets up the service with the lecture whose rule will be used. |
| compute_and_upsert_record_for(user) | Computes performance for a single user and upserts their StudentPerformance::Record. Returns the fresh record. |
| compute_and_upsert_all_records! | Computes performance for all students in the lecture. |

Behavior Highlights

  • Batch or targeted: Can compute for all users or a specific subset.
  • Idempotent: Running twice with the same inputs produces the same factual record.
  • Factual updates only: The service is responsible for creating/updating the materialized facts, not for interpreting them.

Recomputation Triggers

The service is invoked in several scenarios to keep performance records accurate:

  1. After coursework grading: A background job can trigger a full recomputation.
  2. After achievement changes: When tutors record or correct lecture achievements for a user.
  3. Just-in-Time: The Registration::Policy triggers a recomputation for a single user at the moment of an exam registration attempt to guarantee 100% correctness.
  4. On-demand by staff: Manual trigger via an admin interface for debugging or corrections.

Example Implementation

module StudentPerformance
  class Service
    def initialize(lecture:)
      @lecture = lecture
      @rule = lecture.student_performance_rule
    end

    def compute_and_upsert_record_for(user)
      points_data = aggregate_points(user)

      met_ids = @rule.required_achievements.select do |achievement|
        achievement.student_met_threshold?(user)
      end.map(&:id)

      record_data = {
        lecture_id: @lecture.id,
        user_id: user.id,
        points_total_materialized: points_data[:total],
        points_max_materialized: points_data[:max],
        percentage_materialized: points_data[:percentage],
        achievements_met_ids: met_ids,
        computed_at: Time.current
      }

      StudentPerformance::Record.upsert(record_data, unique_by: [:lecture_id, :user_id])
      StudentPerformance::Record.find_by(lecture_id: @lecture.id, user_id: user.id)
    end

    def compute_and_upsert_all_records!
      @lecture.students.find_each do |user|
        compute_and_upsert_record_for(user)
      end
    end

    private

    def aggregate_points(user)
      # Implementation aggregates assessment points based on rule configuration
      # Returns hash with :total, :max, :percentage
    end
  end
end
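
The `aggregate_points` helper is left as a stub above. A minimal pure-Ruby sketch of the intended aggregation follows; the hash keys `:total`, `:max`, `:percentage` come from the stub's comment, while the helper name `aggregate_points_from` and its argument shape are illustrative assumptions (per the Rule chapter, points are summed over all lecture assignments without filtering):

```ruby
# Sketch of the aggregation behind aggregate_points (assumption: earned points
# are summed across all lecture assignments; max is the sum of max_points).
def aggregate_points_from(earned_points, max_points)
  total = earned_points.sum
  max = max_points.sum
  percentage = max.zero? ? 0.0 : (total * 100.0 / max).round(1)
  { total: total, max: max, percentage: percentage }
end
```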

StudentPerformance::Evaluator (Service Object)

Teacher-Facing Proposal Generator

What it represents

A teacher-facing tool that interprets factual StudentPerformance::Record entries against a StudentPerformance::Rule to generate bulk certification proposals. Used exclusively in the teacher UI for bulk certification workflows and rule change impact analysis. Never called during student registration or finalization runtime.

Think of it as

The "proposal calculator" for teachers: shows which students would pass/fail based on current rules, but doesn't make authoritative decisions (that's Certification's job).

Public Interface

| Method | Purpose |
|---|---|
| initialize(rule) | Sets up the evaluator with the rule to be used for interpretation. |
| evaluate(record) | Evaluates a single StudentPerformance::Record and returns a structured proposal. |
| bulk_evaluate(records) | Convenience method to evaluate multiple records at once for UI display. |

Behavior Highlights

  • Teacher-only tool: Used in Certification UI and rule editing workflows
  • No runtime gating: Never called by Registration::Policy during registration/finalization
  • Proposal generator: Outputs are suggestions for teachers, not authoritative decisions
  • Rule change preview: Shows impact when teacher edits thresholds (50% → 45%)

Usage Contexts

Where Evaluator IS used:

  • Bulk Certification UI: "Generate proposals for all students"
  • Rule Edit Modal: "Preview: 12 students would change from failed → passed"
  • Teacher Dashboard: "23 students currently meet requirements"

Where Evaluator is NOT used:

  • Student registration attempts (Policy checks Certification directly)
  • Finalization guards (Policy requires Certification=passed)
  • Any automated student-facing flows

Example Implementation

require "set"

module StudentPerformance
  class Evaluator
    Result = Struct.new(:proposed_status, :details, keyword_init: true)

    def initialize(rule)
      @rule = rule
    end

    def evaluate(record)
      return Result.new(proposed_status: :failed, details: {}) unless record

      req_pts = required_points(@rule)
      meets_points = req_pts.nil? || record.points_total_materialized.to_i >= req_pts
      meets_achievements = includes_all?(
        record.achievements_met_ids,
        @rule.required_achievements.pluck(:id)
      )
      proposed = (meets_points && meets_achievements) ? :passed : :failed

      Result.new(
        proposed_status: proposed,
        details: {
          current_points: record.points_total_materialized,
          required_points: req_pts,
          current_achievement_ids: record.achievements_met_ids,
          required_achievement_ids: @rule.required_achievements.pluck(:id)
        }
      )
    end

    def bulk_evaluate(records)
      records.map { |record| [record, evaluate(record)] }.to_h
    end

    private

    def required_points(rule)
      return rule.min_points_absolute if rule.min_points_absolute.present?
      return nil unless rule.min_percentage.present?

      total = rule.lecture.assignments.sum(:max_points)
      (total * rule.min_percentage / 100.0).ceil
    end

    def includes_all?(have_ids, need_ids)
      return true if need_ids.blank?
      have = Array(have_ids).map(&:to_i).to_set
      need = Array(need_ids).map(&:to_i).to_set
      have >= need
    end
  end
end
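
The rule-change preview described under "Usage Contexts" ("12 students would change from failed → passed") amounts to diffing two proposal sets. A sketch under assumed shapes: `flip_summary` and the `{ user_id => status }` hashes are illustrative, not part of the chapter's API.

```ruby
# Counts status flips between proposals computed under the old and the new rule.
# Input: { user_id => :passed / :failed } for each rule version (assumed shape).
def flip_summary(old_proposals, new_proposals)
  old_proposals.each_with_object(Hash.new(0)) do |(user_id, old_status), acc|
    new_status = new_proposals[user_id]
    next if new_status.nil? || new_status == old_status
    acc["#{old_status} -> #{new_status}"] += 1
  end
end
```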

StudentPerformance::Certification (ActiveRecord Model)

Teacher-declared pass/fail with audit

What it represents

An authoritative teacher decision per (lecture, user) with a status lifecycle and audit fields. Can be created early as pending, then resolved to passed/failed. Manual overrides are encoded as source: :manual.

Main fields & associations

| Field | Type | Description |
|---|---|---|
| lecture_id | FK | Lecture the certification belongs to |
| user_id | FK | Student being certified |
| status | Enum | pending, passed, failed |
| source | Enum | computed (from evaluator) or manual (teacher override) |
| certified_by_id | FK (User) | Who set a non-pending certification |
| certified_at | DateTime | When it was set (non-null unless pending) |
| rule_id | FK (optional) | Rule in effect when set (may be null if rule deleted) |
| note | Text | Optional human note |

Uniqueness: one certification per (lecture_id, user_id).

Behavior highlights

  • Default status: New certifications created as pending during bulk generation.
  • Bulk proposal workflow: Teacher uses Evaluator UI to generate proposals; reviews and accepts/modifies; creates Certification rows with source: :computed.
  • Manual overrides: Teacher can create/update certifications with source: :manual for special cases (medical exemption, etc.).
  • Pre-flight validation (completeness check):
    • Registration phase: When campaign with registration-phase student_performance policy is saved, warn if certifications are missing. On campaign open, hard-fail if any certifications are missing/pending.
    • Finalization phase: When campaign with finalization-phase student_performance policy finalizes, hard-fail if any confirmed registrants have missing/pending/failed certifications. Show remediation UI.
  • Auto-reject at finalization: Students with status: :failed are automatically moved to rejected status during finalization (if finalization-phase policy exists).
  • Rule change handling: When teacher edits rule thresholds, show diff modal with:
    • Computed certifications that would flip (failed → passed or vice versa)
    • Manual certifications that conflict with new proposal
    • Teacher reviews and applies changes manually via modal
    • No automatic updates to Certification table; teacher must confirm

Example (conceptual)

module StudentPerformance
  class Certification < ApplicationRecord
    self.table_name = "student_performance_certifications"

    enum status: { pending: 0, passed: 1, failed: 2 }
    enum source: { computed: 0, manual: 1 }

    belongs_to :lecture
    belongs_to :user
    belongs_to :certified_by, class_name: "User", optional: true
    belongs_to :rule, class_name: "StudentPerformance::Rule", optional: true

    validates :lecture_id, uniqueness: { scope: :user_id }
    validates :certified_by, presence: true, unless: :pending?
    validates :certified_at, presence: true, unless: :pending?

    def self.passed?(lecture:, user:)
      find_by(lecture: lecture, user: user)&.passed? || false
    end
  end
end

Integration with Registration::Policy

What it represents

Exam eligibility can be implemented as a Registration::Policy of kind: :student_performance. Policies are evaluated by phase: registration, finalization, or both. Unlike other policy types (email, deadline) that gate at runtime, student_performance policies enforce data completeness before the phase starts, then check Certification status at runtime.

Architecture Overview

The integration follows a clear separation of concerns:

StudentPerformance::Record (materialized data layer)

  • Stores what the student has achieved (points, achievements).
  • Is recomputed on demand to ensure freshness.

StudentPerformance::Certification (authoritative decision layer)

  • Stores teacher-declared pass/fail status per student.
  • Required to be complete before registration/finalization phases can start.
  • Policy checks this table at runtime, never calls Evaluator.

Registration::Policy (gating layer)

  • Enforces data prerequisites: certifications must be complete before phase starts.
  • Runtime evaluation: checks Certification.status == :passed for each student.
  • Never computes or proposes; just reads authoritative certification data.

StudentPerformance::Service (computation layer)

  • Aggregates assessment points and checks achievements.
  • Creates or updates the factual StudentPerformance::Record.
  • Used for background updates and teacher dashboards, not runtime gating.

Policy Configuration

When a teacher wants student performance gating, they add a Registration::Policy record that references the lecture whose active rule applies:

campaign.registration_policies.create!(
  kind: :student_performance,
  phase: :finalization, # or :registration or :both for completeness checks at both stages
  active: true,
  position: 1,
  config: { "lecture_id" => 42 }
)

The policy queries StudentPerformance::Rule.find_by(lecture_id: 42, active: true) to get the actual criteria for UI display, but enforcement is done via Certification table lookups.

Pre-flight Validation (Data Completeness)

Unlike other policy types, student_performance policies require data preparation before the phase starts:

Registration phase policy:

  • On campaign save: Warn if any lecture students lack a Certification (any status)
  • On campaign open: Hard-fail if any lecture students lack a Certification or have status: :pending
  • Runtime (student registers): Check Certification.status == :passed for that student

Finalization phase policy:

  • On finalize trigger: Hard-fail if any confirmed registrants lack a Certification or have status: :pending
  • Show remediation UI for teacher to resolve pending → passed/failed
  • Auto-reject students with status: :failed
  • Only materialize students with status: :passed
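
The finalization checks above reduce to two selections over certification status. A sketch assuming a simple `{ user_id => status }` map, where nil means no certification exists (the helper names are illustrative):

```ruby
# Users who block finalization: no certification at all, or still pending.
def preflight_blockers(cert_status_by_user)
  cert_status_by_user.select { |_id, status| status.nil? || status == :pending }.keys
end

# Users auto-rejected at finalization: certified as failed.
def auto_rejected(cert_status_by_user)
  cert_status_by_user.select { |_id, status| status == :failed }.keys
end
```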

Config Field Reference

| Field | Type | Purpose |
|---|---|---|
| lecture_id | Integer | Which lecture this eligibility applies to (references that lecture's active StudentPerformance::Rule) |

Why This Design?

Single source of truth: The StudentPerformance::Rule model defines what "sufficient performance" means. Registration policies just check "does this student have sufficient performance for lecture X?"

Benefits:

  1. No duplication: Criteria defined once in StudentPerformance::Rule
  2. Consistent across exams: Main exam and retake exam both reference the same performance requirements
  3. Easy updates: Professor changes rule.update!(min_percentage: 45) and all exam campaigns automatically use new threshold
  4. Clear separation: Rule defines performance criteria, policy gates registration based on those criteria

Example: A lecture has one StudentPerformance::Rule (50% points + presentation). Multiple exam campaigns (midterm, final, retake) all have Registration::Policy records with kind: :student_performance, config: { lecture_id: 42 }. All reference the same rule.

Policy Config Reference

Exam eligibility policies (Registration::Policy with kind: :student_performance) store only a minimal JSONB config:

{ "lecture_id": 42 }

All threshold and achievement criteria live in StudentPerformance::Rule (regular columns, associations). Changing a threshold (e.g. 50 → 45) is done by updating the rule record via a diff modal, not the policy config. The JSONB usage rationale (generic policy kinds need flexible keyed configs) is documented centrally in the Registration chapter; this chapter only notes the minimal linkage.

Runtime Evaluation

Once certifications are complete and the phase is open, evaluation is simple:

# Pseudo-code for Registration::Policy#evaluate(user) when kind == :student_performance
def eval_student_performance(user)
  lecture = Lecture.find(config["lecture_id"])

  cert = StudentPerformance::Certification.find_by(lecture: lecture, user: user)

  if cert&.passed?
    pass_result(:certification_passed)
  else
    fail_result(:certification_not_passed, "Lecture performance certification not passed")
  end
end

No Evaluator calls, no Service calls at registration time. Just a simple table lookup.

Flowchart: Student Performance Policy Flow

flowchart TD
  Start([Teacher adds student_performance policy]) --> Phase{Which phase?}

  Phase -- registration --> RegSetup[Campaign save: warn if certs incomplete]
  RegSetup --> RegOpen[Campaign open: fail if certs missing/pending]
  RegOpen --> RegRuntime[Student registers]
  RegRuntime --> CertCheck1[Check Certification.status]
  CertCheck1 -- passed --> Allow[Allow registration]
  CertCheck1 -- failed/missing --> Block[Block registration]

  Phase -- finalization --> FinSetup[Students register freely]
  FinSetup --> FinTrigger[Teacher clicks finalize]
  FinTrigger --> CertCheck2{All confirmed have certs?}
  CertCheck2 -- missing/pending --> Remediate[Show remediation UI]
  CertCheck2 -- all resolved --> AutoReject[Auto-reject failed certs]
  AutoReject --> Materialize[Materialize passed certs]

  Phase -- both --> BothReg[Registration phase flow]
  BothReg --> BothFin[+ Finalization phase flow]

Complete Example Walkthrough

Setup Phase:

  1. Professor creates Linear Algebra lecture with weekly homework assignments
  2. Professor creates achievements:
    presentation = Achievement.create!(
      lecture: linear_algebra,
      title: "Blackboard Presentation",
      value_type: :boolean
    )
    attendance = Achievement.create!(
      lecture: linear_algebra,
      title: "Lab Attendance",
      value_type: :numeric,
      threshold: 12
    )
    
  3. Professor creates performance rule (ONCE for the lecture):
    rule = StudentPerformance::Rule.create!(
      lecture: linear_algebra,
      min_percentage: 50,
      active: true
    )
    rule.required_achievements << [presentation, attendance]
    

Computation Phase:

  1. Semester progresses, students submit homework, tutors grade.
  2. After final homework deadline, a background job runs:
    StudentPerformance::Service.new(lecture: linear_algebra).compute_and_upsert_all_records!
    
  3. System creates StudentPerformance::Record entries:
    • Alice: 58/100 points (58%), achievements_met_ids: [1, 2]
    • Bob: 42/100 points (42%), achievements_met_ids: []
    • Carol: 65/100 points (65%), achievements_met_ids: [1]

Certification Phase:

  1. Professor opens Certification UI, clicks "Generate Proposals"
  2. Evaluator runs for all students, showing:
    • Alice: proposed_status: :passed (has points + achievements)
    • Bob: proposed_status: :failed (insufficient points)
    • Carol: proposed_status: :failed (missing attendance achievement)
  3. Professor reviews and bulk-creates certifications:
    # Alice: accept proposal
    StudentPerformance::Certification.create!(
      lecture: linear_algebra, user: alice,
      status: :passed, source: :computed,
      certified_by: professor, certified_at: Time.current,
      rule: rule
    )
    
    # Bob: accept proposal
    StudentPerformance::Certification.create!(
      lecture: linear_algebra, user: bob,
      status: :failed, source: :computed,
      certified_by: professor, certified_at: Time.current,
      rule: rule
    )

    # Carol: manual override (medical exemption for attendance)
    StudentPerformance::Certification.create!(
      lecture: linear_algebra, user: carol,
      status: :passed, source: :manual,
      certified_by: professor, certified_at: Time.current,
      note: "Medical exemption for attendance requirement"
    )
    

Campaign Setup:

  1. Professor sets up exam campaign for final exam
  2. Professor adds registration policy:
    policy = campaign.registration_policies.create!(
      kind: :student_performance,
      phase: :both,
      config: { "lecture_id" => linear_algebra.id }
    )
    
  3. Campaign save: system checks all students have certifications (all have status, so warning clears)
  4. Professor clicks "Open Registration": system verifies no pending certifications remain (all passed/failed, so opens successfully)

Registration Phase:

  1. Alice attempts registration:
    • Policy evaluates: finds Certification with status: :passed
    • Registration succeeds
  2. Bob attempts registration:
    • Policy evaluates: finds Certification with status: :failed
    • Registration blocked with message
  3. Carol attempts registration:
    • Policy evaluates: finds Certification with status: :passed
    • Registration succeeds (manual override respected)

Finalization Phase:

  1. Professor clicks "Finalize Campaign"
  2. System checks finalization policies:
    • Alice: Certification=passed ✓
    • Carol: Certification=passed ✓
    • (Bob never registered, so not checked)
  3. System materializes exam roster with Alice and Carol

Policy Evaluation Guarantees

The integration provides several guarantees:

  1. Data Completeness: Pre-flight checks ensure certifications exist before phase opens
  2. Simple Runtime: Policy just checks Certification table (no computation during registration)
  3. Auditability: Certification stores who decided what and when
  4. Manual Override Support: Teachers can override via source: :manual
  5. Idempotency: Repeated checks with same certification data yield same result

Multiple Policies

A campaign can have multiple policies that must all pass:

# Example: Eligibility + enrollment deadline + course prerequisite
campaign.registration_policies.create!([
  { kind: :student_performance, position: 1, config: { ... } },
  { kind: :deadline, position: 2, config: { ... } },
  { kind: :course_prerequisite, position: 3, config: { ... } }
])

Policies are evaluated in position order. First failure stops evaluation and returns that failure to the user.
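
The position-ordered, first-failure-wins semantics can be sketched in plain Ruby; the `Policy` struct and result tuples here are illustrative stand-ins for the real Registration::Policy API documented in the Registration chapter:

```ruby
# Each policy carries a position and a check returning [:pass] or [:fail, reason].
Policy = Struct.new(:position, :check)

# Evaluates policies in position order; the first failure short-circuits.
def evaluate_policies(policies, user)
  policies.sort_by(&:position).each do |policy|
    status, reason = policy.check.call(user)
    return [:fail, reason] if status == :fail
  end
  [:pass]
end
```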


ERD

erDiagram
  STUDENT_PERFORMANCE_RECORD }o--|| LECTURE : "scoped to"
  STUDENT_PERFORMANCE_RECORD }o--|| USER : "tracks"
  STUDENT_PERFORMANCE_RULE ||--|| LECTURE : "configures"
  ACHIEVEMENT }o--|| LECTURE : "belongs to"
  STUDENT_PERFORMANCE_CERTIFICATION }o--|| LECTURE : "issued for"
  STUDENT_PERFORMANCE_CERTIFICATION }o--|| USER : "issued to"
  STUDENT_PERFORMANCE_CERTIFICATION }o--|| STUDENT_PERFORMANCE_RULE : "snapshot of"

Sequence Diagram

sequenceDiagram
  actor Teacher
  actor Student
  participant Rule as StudentPerformance::Rule
  participant Service as StudentPerformance::Service
  participant Record as StudentPerformance::Record
  participant Evaluator as StudentPerformance::Evaluator
  participant Cert as StudentPerformance::Certification
  participant Campaign as Registration::Campaign

  rect rgb(235, 245, 255)
  note over Teacher,Record: Stage 1: Rule Creation & Fact Computation
  Teacher->>Rule: create/update!(thresholds, achievements)
  Service->>Record: compute_and_upsert_all_records!(lecture)
  end

  rect rgb(255, 245, 235)
  note over Teacher,Cert: Stage 2: Teacher Certification (UI-driven)
  Teacher->>Evaluator: open Certification UI
  Evaluator->>Record: bulk_evaluate(all_records)
  Evaluator-->>Teacher: show proposals (passed/failed)
  Teacher->>Cert: bulk create/update (pending → passed/failed)
  Teacher->>Cert: manual overrides (source: manual)
  end

  rect rgb(245, 255, 245)
  note over Teacher,Campaign: Stage 3: Campaign Setup & Open
  Teacher->>Campaign: add student_performance policy (phase: registration)
  Campaign->>Cert: pre-flight: check all students have non-pending cert
  alt Missing/pending certs
    Campaign-->>Teacher: hard-fail: resolve X certifications
  end
  Campaign->>Campaign: status → open
  end

  rect rgb(235, 255, 235)
  note over Student,Campaign: Stage 4: Registration (runtime)
  Student->>Campaign: register
  Campaign->>Cert: check Certification.status == passed?
  alt Passed
    Campaign-->>Student: success
  else Failed/missing
    Campaign-->>Student: blocked
  end
  end

  rect rgb(255, 235, 235)
  note over Campaign,Cert: Stage 5: Finalization
  Campaign->>Service: recompute all facts (freshness)
  Campaign->>Cert: require passed for all confirmed
  alt Any failed
    Campaign->>Campaign: auto-reject those registrations
  end
  alt Any pending/missing
    Campaign-->>Teacher: hard-fail: remediation UI
  end
  Campaign->>Campaign: materialize allocation
  end

Proposed Folder Structure

app/
├── models/
│   ├── achievement.rb (top-level)
│   └── student_performance/
│       ├── record.rb
│       ├── rule.rb
│       ├── rule_achievement.rb
│       └── certification.rb
│
└── services/
    └── student_performance/
        ├── service.rb
        └── evaluator.rb

Key Files

  • app/models/achievement.rb - Top-level qualitative accomplishments (used across features)
  • app/models/student_performance/record.rb - Materialized performance status with recomputation support
  • app/models/student_performance/rule.rb - Eligibility criteria configuration
  • app/models/student_performance/rule_achievement.rb - Join model linking rules to required achievements
  • app/models/student_performance/certification.rb - Teacher-declared pass/fail decisions with audit fields
  • app/services/student_performance/service.rb - Performance computation logic with correctness guarantees
  • app/services/student_performance/evaluator.rb - Teacher-facing proposal generation

Database Tables

  • achievements - Top-level assessable type for qualitative accomplishments (integrates with Assessment infrastructure)
  • student_performance_records - Materialized per-user performance (facts only)
  • student_performance_rules - Eligibility criteria configuration per lecture
  • student_performance_rule_achievements - Join table linking rules to required achievements (ensures referential integrity)
  • student_performance_certifications - Teacher certification (pending/passed/failed) with audit and rule snapshot

Note

Column details for each table are documented in the respective model sections above.

Grading Schemes

View Documentation

For UI screens, mockups, and complete workflow documentation, see View Architecture: Exam Grading Workflow.

What is a 'Grading Scheme'?

A grading scheme is a systematic method for converting raw assessment points into final grade values.

  • Common Examples: "54-60 points = 1.0, 48-53 points = 1.3, ...", "30 points or more to pass (grade 4.0)", "90-100% = 1.0, 80-89% = 1.3, ..."
  • In this context: A configurable, versioned mapping applied to exam assessments to compute final grades from points using fixed bands.

Problem Overview

After an exam is graded and all points are recorded, MaMpf needs to:

  • Convert points to grades: Map raw points/percentages to grade values (e.g., German scale 1.0-5.0)
  • Support flexible config: Absolute point thresholds or percentage-based bands
  • Enable analysis: Show distribution statistics before applying scheme
  • Allow adjustments: Let instructors tweak cutoffs based on difficulty
  • Handle manual overrides: Respect individual grade adjustments for special cases
  • Ensure idempotency: Re-applying same scheme produces same results
  • Maintain audit trail: Track which scheme was applied when and by whom

Solution Architecture

We use a configurable scheme model with service-based application:

  • Canonical Source: GradeScheme::Scheme stores scheme configuration per assessment
  • Absolute Bands: JSON config defines grade bands with either absolute points or percentages
  • Service-Based Application: GradeScheme::Applier iterates participations and computes grades
  • Version Control: Hash-based versioning prevents duplicate applications
  • Override Respect: Manual grades bypass scheme application
  • Distribution Analysis: Service provides statistics for informed decision-making
  • Integration Point: Updates Assessment::Participation.grade_value field
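
The distribution analysis mentioned above (exposed via the Applier's analyze_distribution in the usage scenarios below) is plain statistics over raw points. A simplified sketch; the method shape, the naive percentile lookup, and the chosen percentile keys are assumptions, not the Applier's actual implementation:

```ruby
# Distribution stats over raw exam points, as shown to the professor before
# applying a scheme. Simplified: percentiles use a naive floor-index lookup.
def analyze_distribution(points)
  sorted = points.sort
  n = sorted.size
  mid = n / 2
  median = n.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
  percentile = ->(p) { sorted[[(p / 100.0 * n).floor, n - 1].min] }
  {
    min: sorted.first,
    max: sorted.last,
    mean: (points.sum.to_f / n).round(1),
    median: median,
    percentiles: { 10 => percentile.call(10), 25 => percentile.call(25) }
  }
end
```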

GradeScheme::Scheme (ActiveRecord Model)

Grade Mapping Configuration

What it represents

A versioned configuration that defines how to convert assessment points into final grades for a specific assessment.

Think of it as

"The grading curve for the Linear Algebra final exam: 54+ points gets 1.0, 48-53 points gets 1.3, ... or alternatively 90%+ gets 1.0, 80-89% gets 1.3, ..."

The main fields and methods of GradeScheme::Scheme are:

| Name/Field | Type/Kind | Description |
|---|---|---|
| assessment_id | DB column (FK) | The assessment this scheme applies to |
| kind | DB column (Enum) | Scheme type: currently only absolute |
| config | DB column (JSONB) | Scheme-specific configuration (bands, coefficients, etc.) |
| version_hash | DB column | MD5 hash of config for idempotency checking |
| applied_at | DB column | Timestamp when scheme was last applied (nil if draft) |
| applied_by_id | DB column (FK) | User who applied the scheme |
| active | DB column | Boolean: whether this is the currently active scheme |
| assessment | Association | The linked assessment |
| applied_by | Association | The user who applied the scheme |
| compute_hash | Method | Generates deterministic hash from config |
| applied? | Method | Returns true if scheme has been applied (applied_at present) |

Behavior Highlights

  • Only one active scheme per assessment (enforced via validation)
  • Config structure varies by kind (see "Scheme Configurations" section below)
  • version_hash enables idempotency: applying identical config is a no-op
  • Draft schemes (not applied) can be edited freely
  • Applied schemes are immutable (create new version to change)

Example Implementation

module GradeScheme
  class Scheme < ApplicationRecord
    self.table_name = "grade_schemes"

    belongs_to :assessment, class_name: "Assessment::Assessment"
    belongs_to :applied_by, class_name: "User", optional: true

    enum kind: { absolute: 0 }

    validates :assessment_id, uniqueness: { scope: :active, if: :active? }
    validates :config, presence: true
    validate :config_matches_kind

    before_save :compute_hash, if: :config_changed?

    def applied?
      applied_at.present?
    end

    def compute_hash
      self.version_hash = Digest::MD5.hexdigest(config.to_json)
    end

    private

    def config_matches_kind
      case kind.to_sym
      when :absolute
        # Check for either absolute points or percentage-based bands
        has_bands = config["bands"].is_a?(Array)
        errors.add(:config, "must have bands array") unless has_bands

        if has_bands
          first_band = config["bands"].first
          has_points = first_band&.key?("min_points")
          has_pct = first_band&.key?("min_pct")

          unless has_points || has_pct
            errors.add(:config, "bands must have either min_points/max_points or min_pct/max_pct")
          end
        end
      end
    end
  end
end

Usage Scenarios

  • Creating a draft scheme: After an exam is graded, the professor creates: GradeScheme::Scheme.create!(assessment: exam_assessment, kind: :absolute, active: true, config: { bands: [...] }). The scheme is saved but applied_at remains nil.

  • Analyzing distribution: Before applying, the professor requests distribution stats: GradeScheme::Applier.new(scheme).analyze_distribution. This returns { min: 15, max: 98, mean: 72, median: 74, percentiles: { 10 => 45, 25 => 60, ... } }.

  • Adjusting cutoffs: Seeing the exam was harder than expected, the professor lowers cutoffs: scheme.update!(config: { bands: [...] }). The version_hash updates automatically.

  • Applying scheme: The professor finalizes: GradeScheme::Applier.new(scheme).apply!(applied_by: professor). All participations get grade_value computed, scheme.applied_at is set.

  • Preventing re-application: Someone tries to apply again: GradeScheme::Applier.new(scheme).apply!(applied_by: professor). The service checks version_hash, sees it matches, and returns early (idempotent).


Scheme Configurations

Configuration Overview

The config JSONB field contains scheme-specific configuration. Currently, MaMpf supports two grading approaches that are actively used in practice at Heidelberg University.

| Scheme Type | Primary Use Case | Config Format | Status |
|---|---|---|---|
| Absolute Points | Standard approach: fixed point thresholds | min_points/max_points | ✅ In use |
| Percentage-Based | Cross-exam comparison | min_pct/max_pct | ✅ In use |
| Interactive Curve | UI convenience for teachers | Generates absolute config | 🚧 Planned |
| Percentile/Linear | Advanced statistical schemes | N/A | ⏸️ Future |

Absolute Cutoffs

When to use

Use absolute points when: Students and instructors think in terms of concrete point values ("You need 30 points to pass"), which is standard in German mathematics education.

Config structure (typical 60-point exam):

{
  "bands": [
    { "min_points": 54, "max_points": 60, "grade": "1.0" },
    { "min_points": 48, "max_points": 53, "grade": "1.3" },
    { "min_points": 42, "max_points": 47, "grade": "1.7" },
    { "min_points": 36, "max_points": 41, "grade": "2.0" },
    { "min_points": 33, "max_points": 35, "grade": "2.3" },
    { "min_points": 30, "max_points": 32, "grade": "3.0" },
    { "min_points": 27, "max_points": 29, "grade": "3.7" },
    { "min_points": 24, "max_points": 26, "grade": "4.0" },
    { "min_points": 0, "max_points": 23, "grade": "5.0" }
  ]
}
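
As a minimal sketch of how such a config is consumed (grade_for is a hypothetical helper for illustration, not part of the MaMpf codebase), a grade lookup is a single scan over the bands:

```ruby
# Band lookup for the 60-point config above. Hypothetical helper, not MaMpf code.
BANDS = [
  { "min_points" => 54, "max_points" => 60, "grade" => "1.0" },
  { "min_points" => 48, "max_points" => 53, "grade" => "1.3" },
  { "min_points" => 42, "max_points" => 47, "grade" => "1.7" },
  { "min_points" => 36, "max_points" => 41, "grade" => "2.0" },
  { "min_points" => 33, "max_points" => 35, "grade" => "2.3" },
  { "min_points" => 30, "max_points" => 32, "grade" => "3.0" },
  { "min_points" => 27, "max_points" => 29, "grade" => "3.7" },
  { "min_points" => 24, "max_points" => 26, "grade" => "4.0" },
  { "min_points" => 0,  "max_points" => 23, "grade" => "5.0" }
].freeze

def grade_for(points, bands = BANDS)
  # Boundaries are inclusive on both ends; anything outside every band fails
  band = bands.find { |b| points.between?(b["min_points"], b["max_points"]) }
  band ? band["grade"] : "5.0"
end
```

For example, 38 points falls into the 36-41 band and yields "2.0".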

Field reference:

| Field | Type | Description | Example |
|---|---|---|---|
| min_points | Integer | Lower boundary (inclusive) | 54 |
| max_points | Integer | Upper boundary (inclusive) | 60 |
| grade | String | German grade value | "1.0" |

Why absolute points are preferred

  1. Clarity: "You need 30 points to pass" is clearer than "You need 50%"
  2. Transparency: Easier to discuss individual exercises and their point values
  3. Cultural fit: Matches traditional German grading practice
  4. Precision: Avoids rounding issues with percentage calculations
  5. Flexibility: Instructors can adjust bands based on exam difficulty without percentage confusion

Grading example (60-point exam):

| Student | Score | Percentage | Grade | Result |
|---|---|---|---|---|
| Alice | 55 pts | 91.67% | 1.0 | Excellent |
| Bob | 38 pts | 63.33% | 2.0 | Good |
| Carol | 28 pts | 46.67% | 3.7 | Sufficient |
| Dave | 22 pts | 36.67% | 5.0 | Failed |

Percentage-Based Cutoffs

When to use

Use percentages when: You need to compare performance across multiple assessments with different maximum points, or want universal standards independent of exam length.

Config structure:

{
  "bands": [
    { "min_pct": 90, "max_pct": 100, "grade": "1.0" },
    { "min_pct": 80, "max_pct": 89.99, "grade": "1.3" },
    { "min_pct": 70, "max_pct": 79.99, "grade": "1.7" },
    { "min_pct": 60, "max_pct": 69.99, "grade": "2.0" },
    { "min_pct": 55, "max_pct": 59.99, "grade": "2.3" },
    { "min_pct": 50, "max_pct": 54.99, "grade": "3.0" },
    { "min_pct": 45, "max_pct": 49.99, "grade": "3.7" },
    { "min_pct": 40, "max_pct": 44.99, "grade": "4.0" },
    { "min_pct": 0, "max_pct": 39.99, "grade": "5.0" }
  ]
}
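
A matching sketch for the percentage path (pct_grade_for is a hypothetical helper, not MaMpf code): the raw score is first converted to a percentage, then looked up in the bands above.

```ruby
# Percentage-based lookup for the config above. Hypothetical helper, not MaMpf code.
PCT_BANDS = [
  { "min_pct" => 90, "max_pct" => 100,   "grade" => "1.0" },
  { "min_pct" => 80, "max_pct" => 89.99, "grade" => "1.3" },
  { "min_pct" => 70, "max_pct" => 79.99, "grade" => "1.7" },
  { "min_pct" => 60, "max_pct" => 69.99, "grade" => "2.0" },
  { "min_pct" => 55, "max_pct" => 59.99, "grade" => "2.3" },
  { "min_pct" => 50, "max_pct" => 54.99, "grade" => "3.0" },
  { "min_pct" => 45, "max_pct" => 49.99, "grade" => "3.7" },
  { "min_pct" => 40, "max_pct" => 44.99, "grade" => "4.0" },
  { "min_pct" => 0,  "max_pct" => 39.99, "grade" => "5.0" }
].freeze

def pct_grade_for(points, max_points, bands = PCT_BANDS)
  # Round to two decimals, matching the .99 upper bounds of the bands
  pct = (points.to_f / max_points * 100).round(2)
  band = bands.find { |b| pct >= b["min_pct"] && pct <= b["max_pct"] }
  band ? band["grade"] : "5.0"
end
```

Because percentages are rounded to two decimals, every value lands on the two-decimal grid that the .99 upper bounds assume, so no score falls between adjacent bands.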

Field reference:

| Field | Type | Description | Example |
|---|---|---|---|
| min_pct | Float | Lower boundary percentage (inclusive) | 90.0 |
| max_pct | Float | Upper boundary percentage (inclusive) | 100.0 |
| grade | String | German grade value | "1.0" |

Grading example (same students, 60-point exam):

| Student | Score | Percentage | Grade | Result |
|---|---|---|---|---|
| Alice | 55 pts | 91.67% | 1.0 | Excellent |
| Bob | 38 pts | 63.33% | 2.0 | Good |
| Carol | 28 pts | 46.67% | 3.7 | Sufficient |
| Dave | 22 pts | 36.67% | 5.0 | Failed |

Detection Logic

The GradeScheme::Applier automatically detects whether min_points/max_points or min_pct/max_pct is used by inspecting the first band. Both formats use the same kind: :absolute enum value.

Interactive Curve Generation (Frontend Convenience)

Implementation Status

Backend: ✅ Already supported via absolute scheme
Frontend: 🚧 Planned - Interactive UI needs to be built

Design Philosophy

The backend only needs to support the absolute scheme with bands. The "curve generation" is purely a frontend convenience feature that produces valid absolute configs by helping teachers visualize and set boundaries.

Comparison of UI approaches:

| Approach | Speed | Flexibility | Best For | Status |
|---|---|---|---|---|
| Two-Point Auto | ⚡⚡⚡ Fast | ⭐⭐ Medium | Standard linear curves | Recommended starter |
| Manual Drawing | ⚡ Slow | ⭐⭐⭐ High | Custom non-linear curves | Power users |
| Hybrid | ⚡⚡ Medium | ⭐⭐⭐ High | Auto-generate + tweak | Recommended |

Approach 1: Two-Point Auto-Generation

Quick Setup

Teacher sets just two anchors ("54+ gets 1.0", "30+ gets 4.0"), system fills in the rest via linear interpolation.

Workflow:

  1. Teacher opens grading UI and sees histogram of all student scores
  2. Teacher selects "Auto-generate from two points" mode
  3. Teacher drags two markers on the histogram:
    • Excellence threshold: "54 points and above get 1.0"
    • Passing threshold: "30 points and above get 4.0 (pass)"
  4. Frontend calculates linear interpolation for intermediate grades
  5. Frontend displays preview: "54-60→1.0, 48-53→1.3, 42-47→1.7, ..."
  6. Teacher can manually adjust any band boundary if desired (see below)
  7. Frontend sends final absolute config to backend

Example JavaScript helper:

function generateLinearBands(excellentPts, passingPts, totalPts, gradeSteps) {
  // Interpolate band lower bounds linearly from the excellence anchor
  // (first grade) down to the passing anchor (last grade).
  const stepSize = (excellentPts - passingPts) / (gradeSteps.length - 1);

  const bands = gradeSteps.map((grade, index) => {
    const minPts = Math.round(excellentPts - stepSize * index);
    // Top band extends to the exam maximum; every other band ends
    // one point below where the previous (better) band begins.
    const maxPts = index === 0
      ? totalPts
      : Math.round(excellentPts - stepSize * (index - 1)) - 1;
    return { min_points: minPts, max_points: maxPts, grade };
  });

  // Add fail band below the passing threshold
  bands.push({ min_points: 0, max_points: passingPts - 1, grade: "5.0" });

  return bands.sort((a, b) => b.min_points - a.min_points);
}

// Usage
const bands = generateLinearBands(54, 30, 60, 
  ["1.0", "1.3", "1.7", "2.0", "2.3", "3.0", "3.7", "4.0"]
);
// Send to backend: { kind: "absolute", config: { bands } }
| Benefit | Description |
|---|---|
| ⚡ Speed | Quick setup for standard linear grading curves |
| 🎯 Simplicity | Teacher doesn't need to think about each boundary individually |
| 🔧 Flexibility | Can still be manually tweaked afterward |

Approach 2: Full Manual Curve Drawing

Maximum Control

Teacher drags individual boundary markers for each grade on the histogram.

Workflow:

  1. Teacher opens grading UI and sees histogram
  2. Teacher selects "Manual curve" mode
  3. Teacher drags individual boundary markers for each grade:
    • Drag "1.0/1.3 boundary" to set where 1.0 ends and 1.3 begins
    • Drag "1.3/1.7 boundary" to adjust next boundary
    • ... (continues for all grades)
  4. Frontend displays current band configuration
  5. Frontend sends complete absolute config to backend
| Benefit | Description |
|---|---|
| 🎨 Flexibility | Maximum control: the teacher adjusts every boundary |
| 📊 Non-linear | Can create custom curves (generous with top grades, strict with passing) |
| 🧠 Intentional | Works for any grading philosophy |

Approach 3: Hybrid (Best of Both Worlds)

Auto-generate initial bands from two points, then allow manual tweaking of individual boundaries.

Workflow:

  1. Teacher selects "Auto-generate from two points"
  2. System generates initial bands via linear interpolation
  3. Teacher can then manually adjust individual boundaries:
    • "Hmm, let me be more generous with 1.0"
    • Drags the 1.0 minimum from 54 down to 50
    • System either:
      • Option A: Auto-adjusts neighboring bands to fill gaps
      • Option B: Shows warning "Gap detected between 1.0 and 1.3"
  4. Teacher previews grade distribution with adjusted boundaries
  5. Frontend sends final absolute config to backend

Example of manual adjustment after auto-generation:

// Auto-generated
const initialBands = generateLinearBands(54, 30, 60, grades);
// [{ min: 54, max: 60, grade: "1.0" }, { min: 48, max: 53, grade: "1.3" }, ...]

// Teacher drags 1.0 boundary down to 50
function adjustBand(bands, gradeToAdjust, newMinPoints) {
  const index = bands.findIndex(b => b.grade === gradeToAdjust);
  bands[index].min_points = newMinPoints;
  
  // Auto-adjust next band to avoid gaps
  if (index < bands.length - 1) {
    bands[index + 1].max_points = newMinPoints - 1;
  }
  
  return bands;
}

const adjustedBands = adjustBand(initialBands, "1.0", 50);
// [{ min: 50, max: 60, grade: "1.0" }, { min: 48, max: 49, grade: "1.3" }, ...]

Recommended UI elements:

| Element | Purpose |
|---|---|
| 📊 Histogram | Shows score distribution of all students |
| 📍 Draggable markers | Boundary markers overlaid on the histogram for visual adjustment |
| 👁️ Live preview | "X students would get 1.0, Y would fail, ..." |
| 🔄 Reset button | Regenerate from the current two-point anchors |
| 🥧 Distribution chart | Pie/bar chart showing the final grade distribution |

Backend Simplicity

  • Backend receives only { kind: "absolute", config: { bands: [...] } }
  • Doesn't know or care how bands were generated (manual, auto, or hybrid)
  • No special "two-point" or "curve" scheme type needed
  • Simple validation: bands must not overlap, must cover 0 to max_points
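
The validation mentioned above can be sketched as follows (bands_tile_range? is a hypothetical helper, not the actual MaMpf validation): the bands must tile the range 0..max_points with no gaps and no overlaps.

```ruby
# Sketch of the "no gaps, no overlaps, full coverage" check.
# Hypothetical helper, not the actual MaMpf validation code.
def bands_tile_range?(bands, max_points)
  sorted = bands.sort_by { |b| b["min_points"] }
  return false unless sorted.first["min_points"] == 0
  return false unless sorted.last["max_points"] == max_points

  # Each band must start exactly one point above where the previous one ends
  sorted.each_cons(2).all? { |a, b| b["min_points"] == a["max_points"] + 1 }
end
```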

Interactive UI Workflow (Hybrid Approach)

flowchart TD
    Start([Teacher opens grading UI]) --> LoadData[Load all student scores from assessment]
    LoadData --> ShowHistogram[Display histogram of score distribution]
    
    ShowHistogram --> ChooseMode{Teacher chooses mode}
    
    ChooseMode -->|Two-Point Auto| TwoPoint[Select two-point auto-generation]
    ChooseMode -->|Manual Drawing| Manual[Select manual curve mode]
    
    TwoPoint --> DragMarkers[Drag two markers on histogram]
    DragMarkers --> CalcInterpolation[Frontend calculates linear interpolation]
    CalcInterpolation --> GenerateBands[Generate bands array with all grades]
    GenerateBands --> ShowPreview
    
    Manual --> DragBoundaries[Drag individual grade boundaries]
    DragBoundaries --> BuildManualBands[Build bands from boundary positions]
    BuildManualBands --> ShowPreview
    
    ShowPreview[Show preview with histogram overlay] --> DisplayStats[Display grade distribution stats]
    
    DisplayStats --> TeacherReview{Teacher satisfied?}
    
    TeacherReview -->|No - adjust| AdjustChoice{Adjustment type?}
    AdjustChoice -->|Tweak specific band| DragOneBoundary[Drag single boundary marker]
    AdjustChoice -->|Reset and retry| ResetButton[Click reset button]
    
    DragOneBoundary --> AutoAdjustNeighbor[Auto-adjust neighboring band to avoid gaps]
    AutoAdjustNeighbor --> UpdatePreview[Update preview with new distribution]
    UpdatePreview --> DisplayStats
    
    ResetButton --> TwoPoint
    
    TeacherReview -->|Yes| ValidateBands{Bands valid?}
    
    ValidateBands -->|No gaps/overlaps| BuildConfig[Build config JSON]
    ValidateBands -->|Issues found| ShowWarning[Show validation warning]
    ShowWarning --> TeacherReview
    
    BuildConfig --> SendToBackend[Send POST request to backend]
    SendToBackend --> BackendValidate[Backend validates config]
    
    BackendValidate --> SaveScheme[Save GradeScheme::Scheme record]
    SaveScheme --> Success([Scheme created successfully])
    
    style Start fill:#e1f5ff
    style Success fill:#d4edda
    style ShowHistogram fill:#fff3cd
    style ShowPreview fill:#fff3cd
    style BuildConfig fill:#ffeaa7
    style ShowWarning fill:#f8d7da

Future Extensions

Not Currently Needed

Additional grading schemes (percentile-based ranking, piecewise mapping, etc.) could be added if needed, but are not currently in use at Heidelberg and thus not implemented.

The flexible JSONB config structure makes it easy to add new scheme types without database migrations.


GradeScheme::Applier (Service Object)

Grade Computer

What it represents

A service that applies a grading scheme to an assessment's participations, computing and persisting final grades.

Think of it as

The "grade calculator" that transforms points into grades according to the configured scheme.

Public Interface

| Method | Purpose |
|---|---|
| initialize(scheme) | Sets up the applier with a specific grade scheme |
| analyze_distribution | Returns statistics about the current points distribution |
| apply!(applied_by:) | Computes grades for all participations and persists them |
| preview | Shows what grades would be assigned without persisting |

Behavior Highlights

  • Idempotent: Checks version_hash before applying; skips if the scheme was already applied with an identical config
  • Manual override respect: Skips participations with manual_grade_override flag
  • Transaction-safe: Uses database transaction for consistency
  • Efficient: Single query to load all participations, batch updates
  • Statistics: Analyzes distribution for informed decision-making

Scheme Application Workflow

UI Workflow

The exam grading workflow progresses through four distinct phases with dedicated UI screens. See View Architecture: Exam Grading Workflow for detailed mockups and phase-by-phase UI progression.

High-level phases:

  1. Phase 1: Point Entry — Teachers enter task points for each student; grade column remains empty
  2. Phase 2: Distribution Analysis — View histogram, statistics, and percentiles of achieved points
  3. Phase 3: Scheme Configuration — Set excellence/passing thresholds (Two-Point Auto) or manually define grade boundaries (Manual Curve)
  4. Phase 4: Scheme Applied — Grades auto-computed; point edits trigger automatic grade recalculation
flowchart TD
    Start([Exam grading complete]) --> CreateScheme[Professor creates draft scheme]
    CreateScheme --> AnalyzeDist[Analyze distribution statistics]
    
    AnalyzeDist --> ViewStats[View: min, max, mean, median, percentiles]
    ViewStats --> InitialConfig[Set initial config bands]
    
    InitialConfig --> Preview[Preview grade distribution]
    Preview --> ReviewResults[Review: How many pass/fail?]
    
    ReviewResults --> Satisfied{Satisfied with<br/>distribution?}
    
    Satisfied -->|No - too harsh| LowerThreshold[Lower cutoff thresholds]
    Satisfied -->|No - too lenient| RaiseThreshold[Raise cutoff thresholds]
    
    LowerThreshold --> UpdateConfig[Update scheme config]
    RaiseThreshold --> UpdateConfig
    UpdateConfig --> Preview
    
    Satisfied -->|Yes| Apply[Apply scheme to all participations]
    Apply --> Transaction[Database transaction starts]
    
    Transaction --> CheckHash{version_hash<br/>already applied?}
    CheckHash -->|Yes| SkipAll[Skip - idempotent]
    CheckHash -->|No| IterateParticipations[Iterate all participations]
    
    IterateParticipations --> CheckOverride{Has manual<br/>override?}
    CheckOverride -->|Yes| SkipStudent[Skip this student]
    CheckOverride -->|No| ComputeGrade[Compute grade from points]
    
    ComputeGrade --> UpdateGrade[Update grade_value]
    UpdateGrade --> MoreStudents{More students?}
    SkipStudent --> MoreStudents
    
    MoreStudents -->|Yes| CheckOverride
    MoreStudents -->|No| MarkApplied[Mark scheme as applied]
    
    MarkApplied --> Commit[Commit transaction]
    SkipAll --> Done([Application complete])
    Commit --> Done
    
    style Start fill:#e1f5ff
    style Done fill:#d4edda
    style Apply fill:#fff3cd
    style Satisfied fill:#ffeaa7

Grade Auto-Update

After scheme application, if a teacher edits any task points for a student, the grade recalculates automatically based on the new total points. This prevents forgotten manual updates and keeps grades consistent with the configured scheme.

Grade Computation Algorithm

flowchart TD
    Start([compute_grade_for participation]) --> GetPoints[Get points_total from participation]
    GetPoints --> GetMaxPoints[Get effective_total_points from assessment]
    
    GetMaxPoints --> InspectBand[Inspect first band in config]
    InspectBand --> CheckFormat{Band format?}
    
    CheckFormat -->|Has min_points| AbsoluteFlow[Use absolute points scheme]
    CheckFormat -->|Has min_pct| PercentageFlow[Use percentage scheme]
    CheckFormat -->|Neither| Fallback[Return grade 5.0 - malformed config]
    
    AbsoluteFlow --> SortAbsBands[Sort bands by min_points descending]
    SortAbsBands --> FindAbsBand[Find band where:<br/>points >= min_points AND<br/>points <= max_points]
    FindAbsBand --> AbsFound{Band found?}
    AbsFound -->|Yes| ReturnAbsGrade[Return band grade]
    AbsFound -->|No| Return50Abs[Return grade 5.0]
    
    PercentageFlow --> CalcPct[Calculate percentage:<br/>points / max_points * 100]
    CalcPct --> SortPctBands[Sort bands by min_pct descending]
    SortPctBands --> FindPctBand[Find band where:<br/>percentage >= min_pct AND<br/>percentage <= max_pct]
    FindPctBand --> PctFound{Band found?}
    PctFound -->|Yes| ReturnPctGrade[Return band grade]
    PctFound -->|No| Return50Pct[Return grade 5.0]
    
    ReturnAbsGrade --> End([Grade returned])
    Return50Abs --> End
    ReturnPctGrade --> End
    Return50Pct --> End
    Fallback --> End
    
    style Start fill:#e1f5ff
    style End fill:#d4edda
    style Fallback fill:#f8d7da
    style Return50Abs fill:#f8d7da
    style Return50Pct fill:#f8d7da
    style ReturnAbsGrade fill:#d4edda
    style ReturnPctGrade fill:#d4edda

Example Implementation

module GradeScheme
  class Applier
    def initialize(scheme)
      @scheme = scheme
      @assessment = scheme.assessment
    end

    def analyze_distribution
      participations = @assessment.participations.where(status: :graded)
      points = participations.pluck(:points_total)
      return { count: 0 } if points.empty?

      {
        count: points.size,
        min: points.min,
        max: points.max,
        mean: points.sum.to_f / points.size,
        median: points.sort[points.size / 2], # upper median for even counts
        percentiles: calculate_percentiles(points),
        max_possible: @assessment.effective_total_points
      }
    end

    def apply!(applied_by:)
      return if already_applied?

      Assessment::Participation.transaction do
        participations = @assessment.participations.where(status: :graded)

        participations.each do |participation|
          next if participation.manual_grade_override?

          grade = compute_grade_for(participation)
          participation.update!(grade_value: grade)
        end

        @scheme.update!(applied_at: Time.current, applied_by: applied_by)
      end
    end

    def preview
      participations = @assessment.participations.where(status: :graded)

      participations.map do |p|
        {
          user_id: p.user_id,
          points: p.points_total,
          percentage: percentage_for(p),
          proposed_grade: compute_grade_for(p),
          current_grade: p.grade_value
        }
      end
    end

    private

    def already_applied?
      # Compare the stored hash against a fresh digest of the current config;
      # re-applying an unchanged, already-applied scheme is a no-op.
      @scheme.applied? &&
        @scheme.version_hash == Digest::MD5.hexdigest(@scheme.config.to_json)
    end

    def compute_grade_for(participation)
      # Determine whether the scheme uses absolute points or percentages
      first_band = @scheme.config["bands"].first

      if first_band.key?("min_points")
        apply_absolute_points_scheme(participation.points_total)
      elsif first_band.key?("min_pct")
        apply_percentage_scheme(percentage_for(participation))
      else
        "5.0" # Fallback if the config is malformed
      end
    end

    def percentage_for(participation)
      max = @assessment.effective_total_points
      return 0 if max.zero?

      (participation.points_total.to_f / max * 100).round(2)
    end

    def apply_absolute_points_scheme(points)
      bands = @scheme.config["bands"].sort_by { |b| -b["min_points"] }
      band = bands.find { |b| points >= b["min_points"] && points <= b["max_points"] }
      band ? band["grade"] : "5.0"
    end

    def apply_percentage_scheme(percentage)
      bands = @scheme.config["bands"].sort_by { |b| -b["min_pct"] }
      band = bands.find { |b| percentage >= b["min_pct"] && percentage <= b["max_pct"] }
      band ? band["grade"] : "5.0"
    end

    def calculate_percentiles(points)
      sorted = points.sort
      {
        10 => sorted[(sorted.size * 0.1).floor],
        25 => sorted[(sorted.size * 0.25).floor],
        50 => sorted[(sorted.size * 0.5).floor],
        75 => sorted[(sorted.size * 0.75).floor],
        90 => sorted[(sorted.size * 0.9).floor]
      }
    end
  end
end

Usage Scenarios

  • Preview before applying: Professor wants to see results: preview = GradeScheme::Applier.new(scheme).preview. They review the proposed grades and see that 5 students would fail.

  • Adjust and re-preview: Professor lowers the passing threshold: scheme.update!(config: { ... }), then previews again. Now only 2 students fail, which seems fair.

  • Final application: Professor applies: GradeScheme::Applier.new(scheme).apply!(applied_by: professor). All 150 students get their grade_value set.

  • Manual override: One student had exceptional circumstances. The tutor marks: participation.update!(manual_grade_override: true, grade_value: "2.0"). Future scheme applications will skip this record.

  • Idempotent reapplication: System accidentally triggers apply again: GradeScheme::Applier.new(scheme).apply!(applied_by: professor). The service detects the identical version_hash and returns immediately.


Integration with Assessment System

What it represents

Grading schemes extend the Assessment system by providing automated grade computation for assessments that track points.

Relationship to Assessment::Gradable

The Assessment::Gradable concern already provides:

  • grade_value field on Assessment::Participation
  • Manual grade entry capability

Grading schemes add:

  • Automated computation from points
  • Configurable mapping logic
  • Version control and audit trail
  • Distribution analysis tools

When to Use

| Scenario | Use Grading Scheme? |
|---|---|
| Homework assignments | ❌ No: just track points |
| Midterm exam with grade | ✅ Yes: convert points to grade |
| Final exam with grade | ✅ Yes: convert points to grade |
| Seminar talk presentation | ❌ No: manual grade entry suffices |
| Combined course grade | ✅ Yes (future): weight multiple assessments |

Usage Scenarios

  • After exam grading: All task points are entered and Assessment::Participation.points_total values are computed. The professor creates a GradeScheme::Scheme to convert these points to final grades.

  • Manual grade override: A student with exceptional circumstances gets participation.manual_grade_override = true and a direct grade entry. When the scheme is applied, this participation is skipped.

  • Re-grading scenario: A mistake is found in one student's exam. The tutor corrects their task points. The points_total updates via callback. The professor could re-apply the scheme (with same config) to update just that grade, but the idempotency check would skip all unchanged participations.


ERD

erDiagram
    GRADE_SCHEME_SCHEME ||--|| ASSESSMENT : "applies to"
    GRADE_SCHEME_SCHEME }o--|| USER_APPLIED_BY : "applied by"
    ASSESSMENT ||--o{ PARTICIPATION : "has"
    PARTICIPATION ||--o| GRADE_VALUE : "computed by scheme"

Sequence Diagram

sequenceDiagram
    actor Professor
    participant Assessment as Assessment::Assessment
    participant Scheme as GradeScheme::Scheme
    participant Applier as GradeScheme::Applier
    participant Participation as Assessment::Participation

    rect rgb(235, 245, 255)
    note over Professor,Participation: Phase 1: Exam Grading Complete
    Assessment->>Assessment: all task points entered
    Assessment->>Participation: points_total computed for all students
    end

    rect rgb(255, 245, 235)
    note over Professor,Scheme: Phase 2: Scheme Configuration
    Professor->>Scheme: create(kind: absolute, config: {...})
    Professor->>Applier: analyze_distribution
    Applier->>Participation: aggregate points statistics
    Applier-->>Professor: show distribution (mean, percentiles)
    Professor->>Scheme: update(config: adjusted_bands)
    end

    rect rgb(245, 255, 245)
    note over Professor,Participation: Phase 3: Preview & Apply
    Professor->>Applier: preview
    Applier->>Participation: compute proposed grades
    Applier-->>Professor: show grade preview
    Professor->>Applier: apply!(applied_by: professor)
    Applier->>Applier: check version_hash (idempotency)
    loop for each participation
        alt not manual override
            Applier->>Participation: update(grade_value: computed_grade)
        else manual override
            Applier->>Participation: skip
        end
    end
    Applier->>Scheme: update(applied_at, applied_by)
    end

Proposed Folder Structure

app/
└── models/
    └── grade_scheme/
        ├── scheme.rb
        └── applier.rb

Key Files

  • app/models/grade_scheme/scheme.rb - Versioned scheme configuration
  • app/models/grade_scheme/applier.rb - Grade computation and application logic

Database Tables

  • grade_schemes - Scheme configurations with version control

Note

Column details are documented in the GradeScheme::Scheme model section above.

End-to-End Workflow

This chapter walks through a complete semester lifecycle, showing how all the components from previous chapters work together in practice.

Reading Guide

Each phase below shows the Goal, Key Actions, and Technical Flow for that stage of the semester. Follow the phases sequentially to understand how registration flows into grading, which then feeds into eligibility and exam registration.

Phase 0: Semester Setup

Setup Phase

At the start of the semester, staff configures the basic teaching structure.

Staff Actions:

| Action | Details |
|---|---|
| Create Lecture | Set up the lecture record (e.g., "Linear Algebra WS 2024/25") |
| Create Tutorials | Define tutorial groups with times, locations, and capacities |
| (Optional) Create Talks | For seminars, define talk slots for student presentations |

Phase 1: Tutorial/Talk Registration Campaign

Goal

Assign students to tutorial groups or seminar talks

Staff Actions:

| Action | Details |
|---|---|
| Create Campaign | Staff creates a Registration::Campaign for the lecture |
| Set Mode | Choose allocation_mode: first_come_first_served or preference_based |
| Add Items | Create one Registration::Item for each tutorial or talk |
| Attach Policies | Add Registration::Policy records (e.g., institutional_email, prerequisite_campaign) |
| Open Campaign | Make the campaign available for student registration requests |

Student Experience:

Two Registration Modes

  • FCFS Mode: Visit page, check eligibility, register if eligible (immediate confirmation/rejection)
  • Preference Mode: Visit page, check eligibility, rank options if eligible, wait for allocation

Technical Flow:

  • Eligibility check via Campaign#evaluate_policies_for happens when user visits the campaign page
  • Ineligible users see an error message explaining the reason
  • Eligible users see the registration interface (register buttons for FCFS, preference form for preference-based)
  • Each registration request creates a Registration::UserRegistration with status pending (preference-based) or confirmed/rejected (FCFS)
  • Registration::PolicyEngine evaluates all active policies in order during the initial eligibility check
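
The ordered policy evaluation described above can be sketched as follows (the PolicyEngine interface is an assumption; here policies are plain callables): policies run in order, and the first failure short-circuits with its reason.

```ruby
# Hedged sketch of ordered policy evaluation; not the actual PolicyEngine API.
def evaluate_policies(policies, user)
  policies.each do |policy|
    verdict = policy.call(user)
    # First failing policy wins: the user sees its reason as the error message
    return { eligible: false, reason: verdict[:reason] } unless verdict[:ok]
  end
  { eligible: true }
end

# Example policy, analogous to the institutional_email policy named above
EMAIL_POLICY = lambda do |user|
  if user[:email].end_with?("@stud.uni-heidelberg.de")
    { ok: true }
  else
    { ok: false, reason: "institutional_email" }
  end
end
```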

Phase 2: Preference-Based Allocation (if applicable)

Goal

Compute optimal assignment respecting preferences and constraints

Only for Preference-Based Campaigns

This phase is skipped if the campaign uses first_come_first_served mode.

Staff Actions:

  • At or after registration deadline, staff triggers campaign.allocate_and_finalize!
  • Campaign status transitions: open → closed → processing → completed

Technical Details:

| Aspect | Implementation |
|---|---|
| Service | Registration::AllocationService delegates to a solver (e.g., Min-Cost Flow) |
| Cost Model | Preferences are treated as costs (rank 1 = cost 1, rank 2 = cost 2, etc.) |
| Constraints | Respects capacity from Registerable#capacity |
| Output | One confirmed UserRegistration per user; others are rejected |
| Idempotency | The operation can be re-run if needed with the same results |
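
The cost model row above amounts to a direct mapping from preference rank to edge cost. As an illustration only (helper name and shapes are assumptions, not the solver's API):

```ruby
# Ranked preferences become solver edge costs: rank 1 => cost 1, rank 2 => cost 2, ...
# Illustrative helper; the real AllocationService builds a flow network from these.
def preference_costs(ranked_item_ids)
  ranked_item_ids.each_with_index.to_h { |item_id, rank| [item_id, rank + 1] }
end
```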

Phase 3: Allocation Materialization

Goal

Apply confirmed registrations to domain model rosters

Staff Actions:

  • Staff calls campaign.finalize!
  • Registration::AllocationMaterializer iterates through all Registration::Item records
  • For each item, collects confirmed user IDs and calls registerable.materialize_allocation!(user_ids:, campaign:)

Domain Effects:

| Model | Effect |
|---|---|
| Tutorial | Student rosters updated |
| Talk | Speaker assignments updated |
| Exam | Before writing the roster, eligibility is revalidated; ineligible users are excluded |
| Authority | Rosters are now the authoritative source for course operations |
| Idempotency | Same inputs produce the same results (can be re-run safely) |

Phase 4: Post-Allocation Roster Maintenance

Goal

Handle late registrations, drops, and moves

Staff Operations via Roster::MaintenanceService:

| Operation | Method | Purpose |
|---|---|---|
| Transfer | move_user!(from:, to:) | Move a student between tutorials |
| Add | add_user!(to:) | Add a late arrival |
| Remove | remove_user!(from:) | Remove a dropout |

Guardrails

  • Service enforces capacity limits (unless allow_overfill: true)
  • All operations are transactional (atomic)
  • Changes are logged for audit trail
  • Operates on domain rosters directly, independent of campaign
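
The guardrails above can be sketched with plain data (the method name comes from the table; the signature and body are assumptions, not the actual Roster::MaintenanceService): capacity is checked before any mutation, so a failed move leaves both rosters untouched.

```ruby
# Illustrative transfer guardrail sketch; not the actual MaintenanceService.
def move_user!(user_id, from:, to:, capacity:, allow_overfill: false)
  raise ArgumentError, "user not in source roster" unless from.include?(user_id)
  raise ArgumentError, "target roster full" if !allow_overfill && to.size >= capacity

  # Both sides of the move happen together (the real service wraps this
  # in a database transaction)
  from.delete(user_id)
  to << user_id
  nil
end
```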

Phase 5: Coursework Assessments & Grading

Goal

Track student performance on assignments and presentations

Setup Flow:

| Step | Action |
|---|---|
| 1. Create Assessment | For each Assignment, create a linked Assessment::Assessment with requires_points: true |
| 2. Seed Participations | Call assessment.seed_participations_from!(user_ids: tutorial.roster_user_ids) |
| 3. Define Tasks | Create Assessment::Task records for each problem/component |
| 4. Student Submission | Students upload Submission records (possibly as teams) |
| 5. Grading | Tutors grade via Assessment::SubmissionGrader |

Grading Flow:

Team Grading Fan-Out

Service creates Assessment::TaskPoint for each team member automatically. Points are validated against Task#max_points, and Participation#points_total is recomputed automatically.
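
The fan-out can be sketched with plain data (shapes are assumptions; the real service writes Assessment::TaskPoint records and recomputes Participation#points_total via callbacks): one team grade becomes one points row per member, and totals are re-derived per user.

```ruby
# Sketch of team-grading fan-out; illustrative shapes, not the real SubmissionGrader.
def fan_out(member_ids, task_id, points, max_points)
  # Mirrors the validation against Task#max_points mentioned above
  raise ArgumentError, "exceeds max_points" if points > max_points

  member_ids.map { |uid| { user_id: uid, task_id: task_id, points: points } }
end

def points_total(task_points, user_id)
  task_points.select { |tp| tp[:user_id] == user_id }.sum { |tp| tp[:points] }
end
```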

Publication:

  • Staff publishes results by setting assessment.results_published = true

For Talks (Simplified):

| Aspect | Difference |
|---|---|
| Mode | requires_points: false (grade-only mode) |
| Seeding | Seed from the talk speaker roster |
| Grading | Record the final grade_value directly on Assessment::Participation |

Phase 6: Achievement Tracking

Goal

Record qualitative accomplishments for eligibility

Staff Actions:

  • Staff creates Achievement records for students
  • Examples: blackboard_presentation, class_participation, peer_review
  • These augment quantitative points for eligibility determination

Phase 7: Student Performance Materialization

Goal

Materialize student performance facts for all lecture students.

Scope

Performance data is computed for all students enrolled in the lecture (e.g., 150 students), not just those who plan to register. This provides transparency and legal compliance: every student can verify their eligibility status.

Staff Configuration:

  • Staff configures the StudentPerformance::Rule for the lecture (minimum points, required achievements, etc.).
  • A background job runs StudentPerformance::Service.compute_and_upsert_all_records!(lecture), which populates or updates the StudentPerformance::Record for every student in the lecture.

Materialized Data in StudentPerformance::Record:

| Field | Content |
| --- | --- |
| points_total | Sum of relevant coursework points earned so far. |
| achievements_met | A set of Achievement IDs the student has earned. |
| computed_at | Timestamp of last factual recomputation. |
| rule_id | Foreign key to the StudentPerformance::Rule used. |
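
A minimal plain-Ruby sketch of the facts the job materializes. The helper name build_record_facts and the hash inputs are illustrative; the real service reads graded participations and achievements from the database and upserts a Record.

```ruby
# Hypothetical helper mirroring the Record fields above.
def build_record_facts(participations, achievements, rule_id)
  {
    points_total: participations.sum { |p| p[:points_total] },  # raw points only
    achievements_met: achievements.map { |a| a[:id] }.sort,     # earned Achievement IDs
    computed_at: Time.now,                                      # last factual recomputation
    rule_id: rule_id                                            # Rule the facts were computed against
  }
end

facts = build_record_facts(
  [{ points_total: 18.5 }, { points_total: 22.0 }],
  [{ id: 7 }, { id: 3 }],
  1
)
```

Note that nothing here decides eligibility; the output is purely factual, as required by the "Factual Data Only" constraint below.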

Factual Data Only

The Record stores only raw factual data (points, achievements). It does NOT store eligibility status or interpretations. Those are determined later during teacher certification.

Staff Actions:

  • Staff reviews the materialized records to verify correctness
  • Staff can trigger manual recomputation if needed
  • Records are ready for teacher certification (next phase)

Staff Actions:

  • Staff reviews the materialized records to verify correctness
  • Staff can trigger manual recomputation if needed
  • Records are ready for teacher certification (next phase)

Phase 8: Teacher Certification

Goal

Teachers review materialized performance data and certify eligibility decisions for all students.

The Certification Step

This is where human judgment enters the process. Teachers use the StudentPerformance::Evaluator to generate eligibility proposals, then review and certify them, creating persistent StudentPerformance::Certification records.

Staff Workflow:

| Step | Action | Technical Detail |
| --- | --- | --- |
| 1. Generate Proposals | Staff triggers StudentPerformance::Evaluator.bulk_proposals(lecture) | Creates proposals for all students based on Record + Rule |
| 2. Review Proposals | Staff reviews the Certification Dashboard | Shows proposed status (passed/failed) for each student |
| 3. Bulk Accept | Staff clicks "Accept All Proposals" (common case) | Creates Certification records with status: :passed or :failed |
| 4. Manual Overrides | For exceptional cases, staff manually overrides individual certifications | Sets custom status with required note field |
| 5. Verify Completeness | System checks all lecture students have certifications | Required before campaigns can open |
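
The Evaluator's per-student proposal logic can be sketched as a pure function over Record facts and the Rule. Field names (min_points, required_achievements) are illustrative assumptions, not the confirmed schema.

```ruby
# Sketch: a proposal is :passed only if both the point threshold and the
# required achievement kinds are satisfied; otherwise :failed.
def propose_status(record, rule)
  points_ok = record[:points_total] >= rule[:min_points]
  achievements_ok = (rule[:required_achievements] - record[:achievements_met]).empty?
  points_ok && achievements_ok ? :passed : :failed
end

rule = { min_points: 30, required_achievements: ["blackboard_presentation"] }

proposal = propose_status(
  { points_total: 42, achievements_met: ["blackboard_presentation"] },
  rule
)
```

The proposal is only a suggestion; the Certification record is created when a teacher accepts or overrides it.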

Certification Data (StudentPerformance::Certification):

| Field | Content |
| --- | --- |
| user_id | Foreign key to student |
| lecture_id | Foreign key to lecture |
| record_id | Foreign key to the performance Record (optional) |
| rule_id | Foreign key to Rule used (optional, for audit) |
| status | Enum: passed, failed, or pending |
| note | Teacher's note (required for manual overrides) |
| certified_at | Timestamp of certification |
| certified_by_id | Foreign key to teacher who certified |

Pending Status

Certifications with status: :pending are considered incomplete. Campaigns cannot open or finalize until all certifications are resolved to passed or failed.

Rule Change Handling:

Rule Updates After Certification

If staff modifies the StudentPerformance::Rule after certifications exist:

  • System shows a "Rule Changed" warning
  • Staff can view diff: "12 students would change: failed → passed"
  • Staff must review and re-certify affected students
  • System marks old certifications as stale

Recomputation Triggers:

| Trigger | Effect |
| --- | --- |
| Grade Change | Record recomputed, Certification marked for review |
| Achievement Added | Record recomputed, Certification marked for review |
| Rule Modified | All certifications marked for review |

Phase 9: Exam Registration Campaign

Goal

Allow eligible students to register for the exam on a first-come, first-served (FCFS) basis.

Complete Exam Documentation

For full details on the Exam model, see Exam Model.

Pre-Flight Checks:

Campaign Cannot Open Without Complete Certifications

Before staff can transition a campaign to open status:

  1. System verifies all lecture students have StudentPerformance::Certification records
  2. All certifications must have status: :passed or :failed (no pending)
  3. If checks fail, campaign opening is blocked with clear error message
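
The completeness gate can be sketched with plain data (the real check queries StudentPerformance::Certification; the helper name is illustrative):

```ruby
# A campaign may open only if every lecture student has a certification
# whose status is resolved to :passed or :failed (no missing, no :pending).
def campaign_can_open?(certifications, lecture_student_ids)
  status_by_user = certifications.to_h { |c| [c[:user_id], c[:status]] }
  lecture_student_ids.all? { |uid| [:passed, :failed].include?(status_by_user[uid]) }
end

certs = [{ user_id: 1, status: :passed }, { user_id: 2, status: :failed }]
```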

Campaign Setup:

| Step | Action |
| --- | --- |
| 1. Create Exam | Staff creates the Exam record with date, location, and capacity. |
| 2. Create Campaign | Staff creates a Registration::Campaign for the exam. |
| 3. Attach Policy | Add a Registration::Policy with kind: :student_performance, phase: :registration. |
| 4. Optional Policies | May also attach other policies (e.g., institutional_email). |
| 5. Pre-Flight Check | System validates certification completeness. |
| 6. Open | The campaign opens for registrations (only if pre-flight passes). |

Student Experience:

  • Students see their eligibility status based on their Certification.status
  • Only students with status: :passed certifications can successfully register
  • Registration is first-come-first-served until capacity is reached
  • Students receive immediate confirmation or rejection with a reason

Registration Flow:

graph LR
    A[Student Attempts Registration] --> B[PolicyEngine Evaluates]
    B --> C{Student Performance Policy}
    C --> D[Lookup Certification]
    D --> E{Certification.status}
    E -->|:passed| F[Continue to next policy]
    E -->|:failed| G[Reject: Not eligible]
    E -->|:pending| H[Reject: Certification incomplete]
    F --> I[All Policies Pass]
    I --> J[Confirm Registration]

No Runtime Recomputation

Unlike the old approach, the registration flow does NOT trigger any recomputation. It simply looks up the pre-existing Certification record and checks its status. This ensures consistency and prevents race conditions.
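
The lookup a student_performance policy performs can be sketched as a pure function; hashes stand in for Certification records and the result shape is illustrative, not the real PolicyEngine API.

```ruby
# Pure lookup, no recomputation: the policy result is derived entirely
# from the pre-existing certification status.
def student_performance_check(certifications, user_id)
  status = certifications.find { |c| c[:user_id] == user_id }&.fetch(:status, nil)
  case status
  when :passed then { pass: true }
  when :failed then { pass: false, reason: "Not eligible" }
  else              { pass: false, reason: "Certification incomplete" }
  end
end

certs = [{ user_id: 1, status: :passed }, { user_id: 2, status: :failed }]
```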

After Registration Phase:

  • Campaign remains open while students register
  • When registration deadline is reached, staff calls campaign.close!
  • Campaign transitions to processing status
  • Staff then proceeds to finalization (Phase 10)

Phase 10: Exam Registration Finalization

Goal

Materialize confirmed exam registrations to the exam roster.

Pre-Finalization Checks:

Finalization Requires Complete Certifications

Before staff can finalize the campaign:

  1. System re-validates that all lecture students have certifications
  2. All certifications must still be passed or failed (no pending)
  3. System checks if any certifications are marked as stale (due to rule changes or record updates)
  4. If any issues exist, finalization is blocked with a remediation prompt

Remediation Workflow:

| Issue | Resolution |
| --- | --- |
| Pending Certifications | Staff must resolve to passed or failed |
| Stale Certifications | Staff must review and re-certify affected students |
| Missing Certifications | System auto-generates proposals, staff must certify |

Finalization Process:

| Step | Action |
| --- | --- |
| 1. Validation | System runs pre-finalization checks |
| 2. Eligibility Filter | Only confirmed registrants with Certification.status: :passed are included |
| 3. Materialization | Calls exam.materialize_allocation!(user_ids:, campaign:) |
| 4. Status Update | Campaign transitions to completed |
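
The eligibility filter (step 2) can be sketched over plain data; hashes stand in for UserRegistration and Certification records and the helper name is hypothetical.

```ruby
# Only confirmed registrants whose certification is :passed reach the
# exam roster; everyone else is excluded from materialization.
def finalizable_user_ids(registrations, certifications)
  passed = certifications.select { |c| c[:status] == :passed }.map { |c| c[:user_id] }
  registrations.select { |r| r[:status] == :confirmed && passed.include?(r[:user_id]) }
               .map { |r| r[:user_id] }
end

regs  = [{ user_id: 1, status: :confirmed },
         { user_id: 2, status: :confirmed },
         { user_id: 3, status: :rejected }]
certs = [{ user_id: 1, status: :passed },
         { user_id: 2, status: :failed },
         { user_id: 3, status: :passed }]
```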

Post-Finalization State:

  • Exam Roster now contains subset of eligible students who registered (e.g., 85 of 126 eligible)
  • Staff views Exam Roster screen to manage participants
  • Roster is ready for exam administration (room assignments, grading)

Two Distinct Lists

  • Certification Dashboard (Phase 8): All 150 lecture students with eligibility status
  • Exam Roster (Phase 10+): Only 85 registered students who will take the exam

The roster is used for exam administration, while certifications remain for audit/legal purposes.


Phase 11: Exam Grading & Grade Schemes

Goal

Record exam scores and assign final grades

Grading Setup:

| Step | Action |
| --- | --- |
| 1. Create Assessment | After the exam is administered, create an Assessment::Assessment for the exam |
| 2. Seed Participations | From confirmed exam registrants |
| 3. Define Tasks | Create Assessment::Task records for each exam problem |
| 4. Enter Points | Tutors enter points via grading interface |
| 5. Aggregate | Points aggregate to Participation#points_total |

Grade Scheme Application:

Converting Points to Grades

Staff analyzes score distribution (histogram, percentiles), then creates and applies a GradeScheme::Scheme.

| Step | Process |
| --- | --- |
| Analyze | View distribution statistics and histogram |
| Configure | Create GradeScheme::Scheme with absolute point bands or percentage cutoffs |
| Apply | Call GradeScheme::Applier.apply!(scheme) |
| Result | Service computes grade_value for each participation based on points |
| Override | Manual adjustments possible for exceptional cases |
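
A band-based scheme boils down to a threshold lookup. The thresholds and grade values below are made up for illustration; the real Applier reads them from the configured GradeScheme::Scheme.

```ruby
# Descending point thresholds mapped to grade values: the first band whose
# minimum the score meets determines the grade.
BANDS = [
  [50, "1.0"], [45, "1.3"], [40, "1.7"], [35, "2.0"],
  [30, "2.7"], [25, "3.0"], [20, "3.7"], [16, "4.0"], [0, "5.0"]
].freeze

def grade_for(points)
  BANDS.find { |min_points, _grade| points >= min_points }.last
end
```

Percentage-cutoff schemes work the same way after normalizing points by the maximum achievable score.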

Multiple Choice Exam Extension

For exams with multiple choice components requiring legal compliance, see the Multiple Choice Exams chapter for the two-stage grading workflow.

Final Result:

  • Students have both granular points (TaskPoint records) and final grade (Participation#grade_value)

Phase 12: Late Adjustments & Recomputation

Scenario

A student's coursework grade changes after certifications have been created.

System Response:

| Trigger | Action |
| --- | --- |
| Grade Change | The system automatically triggers StudentPerformance::Service.compute_and_upsert_record_for(user). |
| Update | The factual data (points_total, achievements_met) in the student's StudentPerformance::Record is updated. |
| Mark Stale | The associated Certification is marked for review (e.g., a needs_review: true flag). |
| Notify | System alerts staff that certifications need re-review. |
| Re-Certify | Staff must review and re-certify before any new campaigns can open. |

Certification Stability

Once a certification is created, it remains valid until explicitly updated by staff, even if the underlying Record changes. This ensures consistency during active registration campaigns.
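
The stale-marking reaction to a grade change can be sketched as follows; hashes stand in for the Record and Certification, and needs_review is the hypothetical review flag mentioned above.

```ruby
# Facts are refreshed in place; the certification is only flagged for
# review, never rewritten -- staff must re-certify explicitly.
def on_grade_change!(record, certification, new_facts)
  record.merge!(new_facts)
  certification[:needs_review] = true
  [record, certification]
end

record = { points_total: 28, achievements_met: [] }
cert   = { status: :failed, needs_review: false }
on_grade_change!(record, cert, { points_total: 33 })
```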


Phase 13: Reporting & Administration

Goal

Ongoing monitoring and data integrity

Ongoing Activities:

| Activity | Source |
| --- | --- |
| Participation Reports | Assessment::Participation data |
| Eligibility Export | StudentPerformance::Certification |
| Registration Audit | Registration::UserRegistration |
| Roster Adjustments | Roster::MaintenanceService as needed |
| Data Integrity | Background jobs monitoring consistency |

Key Invariants Throughout the Workflow

System Constraints

These constraints are maintained across all phases to ensure data integrity.

| Invariant | Description |
| --- | --- |
| One Record per (lecture, user) | StudentPerformance::Record uniqueness. |
| One Certification per (lecture, user) | StudentPerformance::Certification uniqueness. |
| One Participation per (assessment, user) | Assessment::Participation uniqueness. |
| One Confirmed Registration per (user, campaign) | Registration::UserRegistration constraint. |
| One TaskPoint per (participation, task) | Assessment::TaskPoint uniqueness. |
| Idempotent Materialization | materialize_allocation! produces the same results with the same inputs. |
| Ordered Policy Evaluation | Short-circuits on the first failure. |
| Certification Completeness | Campaigns cannot open or finalize without complete, non-pending certifications. |
| Certification Stability | Existing certifications remain valid until explicitly updated by staff. |
| Phase-Aware Policies | Only policies matching the current phase are evaluated. |
| Exam Assessment Timing | Created only after exam registration closes. |

Chronological Summary

High-Level Flow

A bird's-eye view of the complete workflow from setup to final grades.

| Phase | Summary |
| --- | --- |
| Setup | Create domain models → Configure registrables & rosters |
| Registration | Open campaign → Students register → (Optional: Run solver) → Materialize to rosters |
| Coursework | Seed participations → Define tasks → Students submit → Tutors grade → Publish results |
| Performance | Record achievements → Compute performance records |
| Certification | Generate proposals → Teachers review → Create certifications → Verify completeness |
| Exam Registration | Pre-flight checks → Open campaign → Students register → Close campaign |
| Finalization | Validate certifications → Materialize to exam roster → Complete campaign |
| Exam Grading | Seed exam participations → Grade tasks → Apply grade scheme → Publish grades |
| Ongoing | Maintain rosters → Update grades → Recompute records → Re-certify as needed |

Sequence Diagram

sequenceDiagram
    participant Student
    participant Campaign
    participant Solver
    participant Materializer
    participant Rosterable
    participant Assessment
    participant PerfService as StudentPerformance::Service
    participant PerfRecord as StudentPerformance::Record
    participant Teacher
    participant Evaluator as StudentPerformance::Evaluator
    participant Certification as StudentPerformance::Certification
    participant ExamCampaign
    participant Policy as Registration::Policy

    Student->>Campaign: Visit campaign page
    Campaign->>Policy: Evaluate eligibility (registration phase)
    alt Not eligible
        Campaign-->>Student: Show error with reason
    else Eligible
        Campaign-->>Student: Show registration interface
        Student->>Campaign: Submit registration (FCFS or preference)
    end
    Note over Campaign: If preference mode...
    Campaign->>Solver: Run assignment at deadline
    Solver-->>Campaign: Set confirmed/rejected

    Campaign->>Materializer: finalize!
    Materializer->>Rosterable: materialize_allocation!(user_ids)
    Note over Rosterable: Tutorial/Talk rosters updated

    Assessment->>Rosterable: Seed participations from roster
    Student->>Assessment: Submit coursework
    Note over Assessment: Tutors grade → TaskPoints created

    Assessment->>PerfService: Grading events trigger updates
    PerfService->>PerfRecord: compute_and_upsert_record_for(user)
    Note over PerfRecord: Factual record updated (points, achievements)

    rect rgb(255, 245, 235)
    note over Teacher,Certification: Teacher Certification Phase
    Teacher->>Evaluator: bulk_proposals(lecture)
    Evaluator->>PerfRecord: Read all records
    Evaluator-->>Teacher: Generate eligibility proposals
    Teacher->>Teacher: Review proposals
    Teacher->>Certification: Create certifications (passed/failed)
    Note over Certification: All students certified
    end

    rect rgb(235, 245, 255)
    note over Student,Policy: Exam Registration Phase
    Teacher->>ExamCampaign: Attempt to open campaign
    ExamCampaign->>Certification: Pre-flight: verify completeness
    Certification-->>ExamCampaign: All certifications complete
    ExamCampaign->>ExamCampaign: Transition to open

    Student->>ExamCampaign: Visit exam registration page
    ExamCampaign->>Policy: Evaluate student_performance policy
    Policy->>Certification: Lookup certification for student
    Certification-->>Policy: Return status (passed/failed)
    alt Passed certification
        Policy-->>ExamCampaign: Pass result
        ExamCampaign-->>Student: Show register button
        Student->>ExamCampaign: Click register
        ExamCampaign-->>Student: Confirm registration
    else Failed/pending certification
        Policy-->>ExamCampaign: Fail result
        ExamCampaign-->>Student: Show error (not eligible)
    end
    end

    rect rgb(245, 255, 245)
    note over Teacher,Rosterable: Finalization Phase
    Teacher->>ExamCampaign: Trigger finalize!
    ExamCampaign->>Certification: Validate completeness again
    Certification-->>ExamCampaign: All complete
    ExamCampaign->>Rosterable: materialize_allocation!(eligible_user_ids)
    Note over Rosterable: Exam roster materialized
    end

    Note over Assessment: After exam...
    Assessment->>Assessment: Grade exam tasks
    Assessment->>Assessment: Apply grade scheme
    Assessment-->>Student: Final grades published

Allocation Algorithm Details

Purpose

This chapter details the algorithm for allocating users to Registration::Items (tutorials, talks, etc.) based on ranked preferences while respecting item capacities.

The initial implementation uses a Min-Cost Flow algorithm for its speed and simplicity. The system is designed with a pluggable service interface, allowing a more powerful CP-SAT solver to be used in the future when advanced constraints are needed.


The Strategy Pattern Approach

The system uses a Strategy Pattern to separate the high-level allocation process from the low-level solver implementation. A single service entry point is exposed:

Registration::AllocationService.new(campaign, strategy: :min_cost_flow).allocate!

This allows different solver strategies to be added (e.g., strategy: :cp_sat) without changing any calling code.

Why Start With Min-Cost Flow?

  • Fast and Simple: It has very low model-building overhead and is extremely fast for bipartite assignment problems with linear costs.
  • Good Fit: It perfectly matches the current requirements of the system.
  • Debuggable: The underlying graph model is transparent and easier to debug operationally.

When To Migrate / Offer CP-SAT

A CP-SAT solver should be implemented if any of the following advanced constraints are needed:

  • Fairness Tiers: Lexicographic minimization (e.g., first minimize unassigned users, then minimize users with their 2nd choice, etc.).
  • Mutual Exclusion: A user cannot be assigned to two simultaneous events.
  • Group Assignment: Two or more users must be assigned to the same item.
  • Soft Constraints: Complex penalties for time-of-day, instructor preference, etc.
  • Quotas: Diversity constraints or per-track limits.

Performance

Typical wall time for a campaign with 1,000 users, 50 items, and 3–10 preferences each:

| Solver | Build + Solve Time | Notes |
| --- | --- | --- |
| SimpleMinCostFlow | ~1–5 ms | Very stable, scales well. |
| CP-SAT (simple model) | ~15–60 ms | Solver overhead dominates; power unused. |
| CP-SAT (complex constraints) | ~50–300 ms | Still acceptable for background jobs. |

Conclusion: Min-Cost Flow is more than sufficient for the initial scope and scales well. Even with 10,000 users, it remains sub-second.


Modeling Details (Min-Cost Flow)

Graph Components

  • Source (S): The starting point for all "flow" (users).
  • User Nodes (U): One node for each participating user.
  • Item Nodes (I): One node for each available Registration::Item.
  • Dummy Node (D): An optional node representing the "unassigned" state.
  • Sink (T): The final destination for all flow.

Graph Arcs (Edges)

  • S → u: For each user u, an arc with capacity 1 and cost 0.
  • u → i: For each stated preference, an arc from user u to item i with capacity 1 and cost equal to the preference_rank.
  • i → T: For each item i, an arc with capacity equal to item.capacity and cost 0.
  • u → D: (If allowing unassigned) An arc from each user u to the dummy node D with capacity 1 and a high penalty cost.
  • D → T: (If allowing unassigned) An arc from D to T with capacity equal to the total number of users.

This model guarantees that exactly one unit of flow per user leaves the source, ensuring each user is assigned to at most one real item.

Cost Calibration

  • Preference Rank: The cost is the rank itself (1, 2, 3...).
  • Fallback Cost: For "fill unlisted items" mode, the cost is max_rank + 2.
  • Penalty Cost: The cost for the "unassigned" dummy path is a large constant (e.g., 10_000) to ensure it's only used as a last resort.

Failure Modes

With the dummy node enabled, the model should always find a feasible solution. A failure would only occur due to an internal solver error or corrupted input data (e.g., negative capacities).


Unassigned Semantics and Defaults

  • Defaults we use in tutorial campaigns:
    • fill_unlisted: true at campaign level, no per-student opt-in. This adds edges from each user to all eligible, unranked items at cost max_rank + 2, ensuring a high chance of receiving a seat even beyond the ranked list.
    • allow_unassigned: true with a large dummy penalty. This guarantees feasibility; the dummy path is used only if every eligible item is saturated.
  • After allocation and upon close-out/finalization, any remaining pending registrations must be normalized to rejected. A user is considered "assigned" if they have exactly one confirmed registration in the campaign; otherwise they are "unassigned".
  • The "unassigned" cohort is derived data: users who participated in the campaign but ended up with no confirmed registration. No extra tables are required.
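
Deriving the unassigned cohort is then a simple set difference; the helper name is illustrative and hashes stand in for UserRegistration records.

```ruby
# A user is "assigned" iff they have exactly one confirmed registration
# in the campaign; everyone else who participated is "unassigned".
def unassigned_user_ids(participant_ids, registrations)
  confirmed = registrations.select { |r| r[:status] == :confirmed }
                           .group_by { |r| r[:user_id] }
  participant_ids.reject { |uid| confirmed[uid]&.size == 1 }
end

regs = [
  { user_id: 1, status: :confirmed },
  { user_id: 2, status: :rejected },
  { user_id: 3, status: :confirmed }
]
```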

Service Implementation (Strategy Pattern Skeleton)

# filepath: app/services/registration/allocation_service.rb
module Registration
  class AllocationService
    def initialize(campaign, strategy: :min_cost_flow, **opts)
      @campaign = campaign
      @strategy = strategy
      @opts = opts
    end

    def allocate!
      solver =
        case @strategy
        when :min_cost_flow
          Registration::Solvers::MinCostFlow.new(@campaign, **@opts)
        # when :cp_sat then Registration::Solvers::CpSat.new(@campaign, **@opts)
        else
          raise ArgumentError, "Unknown strategy #{@strategy}"
        end
      solver.run
    end
  end
end

# Solvers are placed in their own module for organization.
module Registration
  module Solvers
    class MinCostFlow
      BIG_PENALTY = 10_000

      def initialize(campaign, fill_unlisted: false, allow_unassigned: true)
        @campaign = campaign
        @fill_unlisted = fill_unlisted
        @allow_unassigned = allow_unassigned
        @prefs = campaign.user_registrations
                         .where.not(preference_rank: nil)
                         .includes(:registration_item)
        @users = @prefs.map(&:user_id).uniq
        @items = campaign.registration_items.includes(:registerable)
        @prefs_by_user = @prefs.group_by(&:user_id)
        @max_rank = @prefs.map(&:preference_rank).max || 1
        @fallback_cost = @max_rank + 2
      end

      def run
        return finalize_empty if @users.empty?
        build_and_solve
      end

      private

      def finalize_empty
        @campaign.update!(status: 'completed')
      end

      def build_and_solve
        mcf = ORTools::SimpleMinCostFlow.new

        # Node index layout: source, user nodes, item nodes, collector sink,
        # optional dummy ("unassigned") node, final sink.
        source = 0
        user_offset = 1
        item_offset = user_offset + @users.size
        sink_real = item_offset + @items.size
        dummy_node = sink_real + 1 if @allow_unassigned
        sink_final = @allow_unassigned ? dummy_node + 1 : sink_real

        idx_user = {}
        idx_item = {}
        @users.each_with_index { |uid, i| idx_user[uid] = user_offset + i }
        @items.each_with_index { |item, i| idx_item[item.id] = item_offset + i }

        mcf.set_node_supply(source, @users.size)
        mcf.set_node_supply(sink_final, -@users.size)
        (user_offset...item_offset).each { |n| mcf.set_node_supply(n, 0) }
        (item_offset...sink_real).each { |n| mcf.set_node_supply(n, 0) }
        if @allow_unassigned
          mcf.set_node_supply(sink_real, 0)
          mcf.set_node_supply(dummy_node, 0)
        end

        # S -> u: exactly one unit of flow per user.
        @users.each do |uid|
          mcf.add_arc_with_capacity_and_unit_cost(source, idx_user[uid], 1, 0)
        end

        # u -> i: one arc per stated preference, cost = preference rank.
        @prefs.each do |reg|
          mcf.add_arc_with_capacity_and_unit_cost(
            idx_user[reg.user_id],
            idx_item[reg.registration_item_id],
            1,
            reg.preference_rank.to_i <= 0 ? 1 : reg.preference_rank.to_i
          )
        end

        # u -> i (fallback): unranked but eligible items at cost max_rank + 2.
        if @fill_unlisted
          @users.each do |uid|
            listed = (@prefs_by_user[uid] || []).map(&:registration_item_id)
            (@items.map(&:id) - listed).each do |iid|
              mcf.add_arc_with_capacity_and_unit_cost(
                idx_user[uid], idx_item[iid], 1, @fallback_cost
              )
            end
          end
        end

        # i -> T: item capacity limits the seats per item.
        @items.each do |item|
          cap = [item.registerable.capacity.to_i, 0].max
          mcf.add_arc_with_capacity_and_unit_cost(
            idx_item[item.id],
            @allow_unassigned ? sink_real : sink_final,
            cap,
            0
          )
        end

        if @allow_unassigned
          # Item flow collected at sink_real must continue on to the final sink.
          mcf.add_arc_with_capacity_and_unit_cost(sink_real, sink_final, @users.size, 0)
          # u -> D -> T: high-penalty last-resort path for unassigned users.
          mcf.add_arc_with_capacity_and_unit_cost(dummy_node, sink_final, @users.size, 0)
          @users.each do |uid|
            mcf.add_arc_with_capacity_and_unit_cost(idx_user[uid], dummy_node, 1, BIG_PENALTY)
          end
        end

        status = mcf.solve
        return fail_solver unless status == ORTools::SimpleMinCostFlow::OPTIMAL

        apply_solution(mcf, idx_user, idx_item, dummy_node)
      end

      # apply_solution (writing confirmed/rejected registrations from arc
      # flows) and fail_solver (error handling) are elided in this skeleton.
    end
  end
end

Graph Diagram (Placeholder)

graph LR
	S((S)) --> U1((User1))
	S --> U2((User2))
	U1 --> I1[(ItemA)]
	U1 --> I2[(ItemB)]
	U2 --> I2
	I1 --> T((T))
	I2 --> T
	U1 --> D((Dummy))
	U2 --> D
	D --> T

Examples & Demos

Unified End-to-End Demo (Phases 0–12)

This demo walks through a complete semester lifecycle, from setup to final reporting. It assumes the models and services from the architectural documentation are implemented.

# --- Phase 0: Semester Setup ---
# Create lecture
lecture = FactoryBot.create(:lecture_with_sparse_toc, :with_title,
                            title: "Linear Algebra I")

# Create users simulating mixed email domains (valid + invalid)
domains = %w[student.uni.edu uni.edu gmail.com]
users = (1..12).map do |i|
  FactoryBot.create(:confirmed_user,
                    email: "user#{i}@#{domains[i % domains.size]}",
                    name: "User #{i}")
end

# Create tutorials
tutorials = (1..3).map do |n|
  FactoryBot.create(:tutorial, lecture: lecture, title: "Tutorial #{n}", capacity: 10)
end

# --- Phase 1: Tutorial Registration ---
tut_campaign = Registration::Campaign.create!(
  campaignable: lecture,
  title: "Tutorial Registration WS 2024/25",
  allocation_mode: :preference_based,
  registration_deadline: 10.days.from_now
)

# Create registration items for each tutorial
tutorials.each do |tut|
  tut_campaign.registration_items.create!(registerable: tut)
end

# Add institutional email policy
tut_campaign.registration_policies.create!(
  kind: :institutional_email,
  position: 1,
  active: true,
  config: { "allowed_domains" => ["uni.edu", "student.uni.edu"] }
)

tut_campaign.update!(status: :open)

# Users submit ranked preferences.
# In a real UI, the controller would use `evaluate_policies_for(user)` before
# allowing a submission.
eligible_submitters = users.select do |u|
  tut_campaign.evaluate_policies_for(u).pass
end
ri_map = tut_campaign.registration_items.index_by(&:registerable_id)

eligible_submitters.each do |user|
  shuffled = tutorials.shuffle
  shuffled.each_with_index do |tut, rank|
    Registration::UserRegistration.create!(
      user: user,
      registration_campaign: tut_campaign,
      registration_item: ri_map[tut.id],
      status: :pending,
      preference_rank: rank + 1
    )
  end
end

# --- Phase 2: Tutorial Allocation ---
# Close registration and run allocation algorithm
tut_campaign.update!(registration_deadline: Time.current - 1.second)
tut_campaign.allocate_and_finalize!

# --- Phase 3: Roster Materialization ---
# Materialization happens automatically within `allocate_and_finalize!`
puts "\nTutorial Rosters After Materialization:"
tutorials.each do |tut|
  # Assuming a `roster_user_ids` method exists on the registerable
  puts "  #{tut.title}: #{tut.roster_user_ids.size} students"
end

# --- Phase 4: Roster Maintenance ---
# Move one student from Tutorial 1 to Tutorial 2
from_tut = tutorials.first
to_tut = tutorials.second
student_to_move_id = from_tut.roster_user_ids.first

if student_to_move_id
  # Assuming a Roster Maintenance service exists
  puts "Moved student #{student_to_move_id} from #{from_tut.title} to #{to_tut.title}"
end

# --- Phase 5: Coursework Assessments ---
# Create two homework assignments with tasks
hw1 = FactoryBot.create(:assignment, lecture: lecture, title: "Homework 1")
hw1_assessment = FactoryBot.create(:assessment, assessable: hw1, title: "Homework 1")
(1..3).each { |i| hw1_assessment.tasks.create!(title: "Problem #{i}", max_points: 10) }

hw2 = FactoryBot.create(:assignment, lecture: lecture, title: "Homework 2")
hw2_assessment = FactoryBot.create(:assessment, assessable: hw2, title: "Homework 2")
(1..3).each { |i| hw2_assessment.tasks.create!(title: "Problem #{i}", max_points: 10) }

# Seed participations from tutorial rosters
lecture_students = users.select { |u| u.email.ends_with?("uni.edu") || u.email.ends_with?("student.uni.edu") }
[hw1_assessment, hw2_assessment].each do |assessment|
  lecture_students.each do |student|
    FactoryBot.create(:participation, assessment: assessment, user: student)
  end
end

# Simulate grading with random points
[hw1_assessment, hw2_assessment].each do |assessment|
  assessment.participations.find_each do |part|
    total_points = 0
    assessment.tasks.each do |task|
      points = rand((task.max_points * 0.4)..task.max_points)
      FactoryBot.create(:task_point, participation: part, task: task, points: points)
      total_points += points
    end
    part.update!(points_total: total_points, status: :graded)
  end
end

# --- Phase 6: Achievement Tracking ---
# Award achievements to first three eligible students
eligible_submitters.first(3).each do |u|
  FactoryBot.create(:achievement,
    lecture: lecture,
    user: u,
    kind: "blackboard_explanation",
    achievable: lecture
  )
end
puts "\nAchievements awarded to #{eligible_submitters.first(3).map(&:name).join(', ')}"

# --- Phase 7: Student Performance Materialization ---
# Configure eligibility rule
rule = StudentPerformance::Rule.find_or_create_by!(lecture: lecture)
rule.update!(
  min_points: 30, # 50% of 60 total points
  required_achievements: { "blackboard_explanation" => 1 },
  assessment_types: ["Assignment"]
)

# Compute performance facts for all students (e.g., via a background job)
service = StudentPerformance::Service.new(lecture)
service.compute_and_upsert_all_records!

puts "\nPerformance records computed for #{lecture_students.size} students"

# --- Phase 8: Teacher Certification ---
# Generate eligibility proposals using the Evaluator
evaluator = StudentPerformance::Evaluator.new(rule)
proposals = {}

lecture_students.each do |student|
  record = StudentPerformance::Record.find_by(lecture: lecture, user: student)
  result = evaluator.evaluate(record)
  proposals[student.id] = result.status
end

puts "\nProposals generated: #{proposals.values.count(:passed)} passed, #{proposals.values.count(:failed)} failed"

# Teacher reviews and bulk-accepts proposals
teacher = users.first # Assuming first user is the teacher
lecture_students.each do |student|
  StudentPerformance::Certification.create!(
    user: student,
    lecture: lecture,
    record: StudentPerformance::Record.find_by(lecture: lecture, user: student),
    rule: rule,
    status: proposals[student.id],
    certified_at: Time.current,
    certified_by: teacher
  )
end

eligible_count = StudentPerformance::Certification.where(lecture: lecture, status: :passed).count
puts "Certifications created: #{eligible_count} students certified as passed"

# Override one failed student (e.g., medical certificate)
failed_cert = StudentPerformance::Certification.find_by(lecture: lecture, status: :failed)
if failed_cert
  failed_cert.update!(
    status: :passed,
    note: "Medical certificate provided",
    certified_at: Time.current,
    certified_by: teacher
  )
  puts "Manual override: Student #{failed_cert.user.name} status changed to passed"
end

# --- Phase 9: Exam Registration Campaign ---
# Create the exam
exam = FactoryBot.create(:exam,
  lecture: lecture,
  title: "Hauptklausur",
  date: 4.weeks.from_now,
  capacity: 100
)

# The lecture (campaignable) hosts the exam registration campaign
exam_campaign = Registration::Campaign.create!(
  campaignable: lecture,
  title: "Hauptklausur Registration",
  allocation_mode: :first_come_first_served,
  registration_deadline: 2.weeks.from_now,
  status: :draft
)
exam_item = exam_campaign.registration_items.create!(registerable: exam)

# Add policies: student performance + institutional email
exam_campaign.registration_policies.create!(
  kind: :student_performance,
  position: 1,
  active: true,
  phase: :registration,
  config: { "lecture_id" => lecture.id }
)
exam_campaign.registration_policies.create!(
  kind: :institutional_email,
  position: 2,
  active: true,
  phase: :registration,
  config: { "allowed_domains" => ["uni.edu", "student.uni.edu"] }
)

# Pre-flight check: verify certification completeness before opening
all_certified = lecture_students.all? do |student|
  cert = StudentPerformance::Certification.find_by(lecture: lecture, user: student)
  cert.present? && cert.status.in?([:passed, :failed])
end

if all_certified
  exam_campaign.update!(status: :open)
  puts "\nExam campaign opened (all students certified)"
else
  puts "\nCampaign opening blocked: incomplete certifications"
  exit
end

# Eligible students register for exam. Policy checks Certification status.
puts "\nExam Registration Process:"
lecture_students.each do |user|
  result = exam_campaign.evaluate_policies_for(user, phase: :registration)
  if result.pass
    puts "  - Student #{user.name}: Eligible (certification: passed). Registering..."
    Registration::UserRegistration.create!(
      user: user,
      registration_campaign: exam_campaign,
      registration_item: exam_item,
      status: :confirmed
    )
  else
    cert = StudentPerformance::Certification.find_by(lecture: lecture, user: user)
    puts "  - Student #{user.name}: Ineligible. Certification status: #{cert&.status || 'missing'}"
  end
end

# Finalize campaign (materializes exam roster after re-checking certifications)
exam_campaign.finalize!
puts "\n#{exam_campaign.user_registrations.confirmed.count} students registered for exam"

# --- Phase 10: Exam Grading ---
# Create assessment for exam
exam_assessment = FactoryBot.create(:assessment, assessable: exam, title: "Final Exam")
task1 = exam_assessment.tasks.create!(title: "Problem 1", max_points: 40)
task2 = exam_assessment.tasks.create!(title: "Problem 2", max_points: 30)
task3 = exam_assessment.tasks.create!(title: "Problem 3", max_points: 30)

# Seed participations from exam roster
exam_campaign.user_registrations.confirmed.each do |reg|
  FactoryBot.create(:participation, assessment: exam_assessment, user: reg.user)
end

# Simulate grading
exam_assessment.participations.find_each do |part|
  points = rand(40..100)
  part.update!(points_total: points)
end

# Create and apply grade scheme
exam_scheme = GradeScheme::Scheme.create!(
  title: "Final Exam Grading",
  bands: [
    { "min_points" => 90, "grade" => "1.0" },
    { "min_points" => 80, "grade" => "2.0" },
    { "min_points" => 70, "grade" => "3.0" },
    { "min_points" => 60, "grade" => "4.0" },
    { "min_points" => 0, "grade" => "5.0" }
  ]
)
# Assuming an applier service exists
# GradeScheme::Applier.new(exam_assessment, exam_scheme).apply!

puts "\nExam graded (conceptual)."

# --- Phase 11: Late Adjustments ---
# Simulate late homework grade change
late_part = hw1_assessment.participations.first
old_points = late_part.points_total
late_part.update!(points_total: old_points + 5)
puts "\nLate adjustment: Student #{late_part.user.name} HW1 points: #{old_points} → #{late_part.points_total}"

# The change triggers record recomputation and marks certification as stale
service.compute_and_upsert_record_for(late_part.user)
cert = StudentPerformance::Certification.find_by(lecture: lecture, user: late_part.user)
puts "  - Performance record recomputed"
puts "  - Certification marked for review (teacher must re-certify before next campaign)"

# Teacher must review and re-certify
# In real workflow, teacher would see "Certification Stale" warning in UI
# and must manually review before opening new campaigns

# --- Phase 12: Reporting & Export ---
puts "\n=== Final Report ==="
puts "Lecture: #{lecture.title}"
puts "Total students in course: #{lecture_students.size}"
puts "Tutorial registrations: #{tut_campaign.user_registrations.confirmed.count}"
puts "Exam registered: #{exam_campaign.user_registrations.confirmed.count}"

Key Observations

This demo illustrates:

  1. Complete Lifecycle: All phases from setup to reporting.
  2. Three-Layer Architecture:
    • Records store factual performance data (points, achievements)
    • Evaluator generates eligibility proposals (computational layer)
    • Certifications capture teacher decisions (authoritative layer)
  3. Pre-Flight Checks: Campaigns cannot open without complete certifications, ensuring policy consistency.
  4. No Runtime Recomputation: Exam registration looks up pre-existing Certifications instead of computing eligibility on-the-fly.
  5. Roster Management: Materialization populates tutorial and exam rosters from confirmed registrations.
  6. Late Adjustments: Grade changes trigger record recomputation and mark certifications for teacher review, but existing certifications remain valid until manually updated.
  7. Composable Policies: The exam campaign combines student_performance and institutional_email policies seamlessly, with phase-aware evaluation.
  8. Teacher Control: Teachers explicitly review and certify all eligibility decisions, maintaining accountability and enabling manual overrides.

Architectural Consistency

This demo follows the decoupled architecture. All model names, service calls, and workflows match the documented design.

Integrity & Invariants

This chapter documents the key invariants and integrity constraints that ensure system correctness throughout the semester lifecycle.

Enforcement Strategy

When feasible, enforce constraints at the database level. For complex business rules, use application-level validations and background reconciliation jobs.


1. Registration & Allocation

Database Constraints

# One confirmed submission per user per campaign
add_index :registration_submissions,
          [:registration_campaign_id, :user_id],
          unique: true,
          where: "status = 'confirmed'",
          name: "idx_unique_confirmed_submission"

# Unique preference ranks per user per campaign
add_index :registration_submissions,
          [:user_id, :registration_campaign_id, :preference_rank],
          unique: true,
          where: "preference_rank IS NOT NULL",
          name: "idx_unique_preference_rank"

Application-Level Invariants

| Invariant | Enforcement |
| --- | --- |
| Registration::UserRegistration.status ∈ {pending, confirmed, rejected} | Enum validation |
| At most one confirmed submission per (user, campaign) | Unique index |
| Preference-based campaigns: each pending submission has unique rank | Unique index + validation |
| Capacity never exceeded at allocation | Allocation algorithm respects registerable.capacity |
| Campaign finalized exactly once | finalize! idempotent with status check |
| assigned_count matches confirmed submissions | Background reconciliation job |
| Assigned users = confirmed UserRegistrations (registration data) | Count from Registration::UserRegistration.where(status: :confirmed) |
| Allocated users = materialized roster (domain data) | Count from rosterable.allocated_user_ids |
| After finalization: assigned users = allocated users | Materialization ensures consistency |

2. Rosters & Materialization

Core Invariants

| Invariant | Details |
| --- | --- |
| Initial roster snapshot | materialize_allocation! sets roster to match confirmed submissions |
| Historical integrity | Post-allocation roster changes don't mutate Registration::UserRegistration records |
| Atomic operations | Roster::MaintenanceService uses transactions |
| Capacity enforcement | Enforced unless explicit override by staff |
| Audit trail | All roster changes logged with actor, reason, timestamp |

Reconciliation

Background job periodically checks:

  • Roster user count vs. capacity limit
  • Orphan roster entries (user deleted but still in roster)

3. Assessments & Grading

Database Constraints

# One participation per (assessment, user)
add_index :assessment_participations,
          [:assessment_id, :user_id],
          unique: true,
          name: "idx_unique_participation"

# One task point per (participation, task)
add_index :assessment_task_points,
          [:participation_id, :task_id],
          unique: true,
          name: "idx_unique_task_point"

# Foreign key integrity
add_foreign_key :assessment_tasks, :assessments
add_foreign_key :assessment_task_points, :assessment_tasks, column: :task_id
add_foreign_key :assessment_task_points, :assessment_participations, column: :participation_id

Application-Level Invariants

| Invariant | Enforcement |
| --- | --- |
| Participation.points_total = sum(task_points.points) | Automatic recomputation on TaskPoint save |
| TaskPoint.points ≤ Task.max_points | Validation on save (relaxed when extra credit is allowed; see Extra Points Allowed) |
| Task records exist only if Assessment has tasks | Validation |
| Results visible only when Assessment.results_published = true | Controller authorization |
| Participation.submitted_at persists across status changes | Never overwritten after initial set |
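A minimal sketch of the first invariant: a participation's points total is always derived from its task points, never edited independently. Plain Ruby structures stand in for the ActiveRecord models.

```ruby
Participation = Struct.new(:task_points, :points_total) do
  # Mirrors the automatic recomputation that would run on TaskPoint save
  def recompute_total!
    self.points_total = task_points.sum { |tp| tp[:points] }
  end
end

part = Participation.new([], 0)
part.task_points << { task: "Problem 1", points: 8 }
part.recompute_total!
part.task_points << { task: "Problem 2", points: 7.5 }
part.recompute_total!
puts part.points_total # 15.5
```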

Multiple Choice Extension

For MC exam-specific constraints, see the Multiple Choice Exams chapter.


4. Student Performance & Certification

Database Constraints

# One performance record per (lecture, user)
add_index :student_performance_records,
          [:lecture_id, :user_id],
          unique: true,
          name: "idx_unique_performance_record"

# One certification per (lecture, user)
add_index :student_performance_certifications,
          [:lecture_id, :user_id],
          unique: true,
          name: "idx_unique_certification"

# Foreign key integrity (record_id and rule_id columns are nullable;
# optionality is declared on the belongs_to associations, not on the FK)
add_foreign_key :student_performance_certifications,
                :student_performance_records,
                column: :record_id
add_foreign_key :student_performance_certifications,
                :student_performance_rules,
                column: :rule_id

Application-Level Invariants

| Invariant | Enforcement |
| --- | --- |
| One Record per (lecture, user) | Unique index |
| One Certification per (lecture, user) | Unique index |
| Records store only factual data (points, achievements) | No eligibility interpretation in Record model |
| Certifications store teacher decisions (passed/failed/pending) | Status enum validation |
| Certification.status ∈ {passed, failed, pending} | Enum validation |
| Campaigns cannot open with pending certifications | Pre-flight validation in Campaign model |
| Campaigns cannot finalize with stale certifications | Pre-finalization validation |
| Manual certification requires note field | Validation when certified_by present |
| Record recomputation preserves existing Certifications | Certification stability: only flagged for review, not auto-updated |
| Certification certified_at timestamp immutable | Set once, never changed (new certification created for updates) |

Certification Lifecycle Invariants

| Phase | Invariant | Details |
| --- | --- | --- |
| Before Registration | All students have Certifications | Pre-flight check blocks campaign opening |
| During Registration | No pending certifications exist | All must be passed or failed |
| Runtime Policy Check | Policy looks up Certification.status | No runtime recomputation |
| Grade Change | Record recomputed, Certification flagged stale | Teacher must review before next campaign |
| Rule Change | All Certifications flagged for review | Teacher sees diff and must re-certify |

5. Grade Schemes

Invariants

| Invariant | Details |
| --- | --- |
| At most one active scheme per assessment | Assessment belongs_to :grade_scheme |
| Identical version_hash = no-op | Applier checks hash before reapplication |
| Manual overrides preserved | Overridden participations skipped during reapplication |
| Bands cover full range | Validation ensures 0.0 to 1.0 coverage |
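An illustrative band lookup, assuming bands are expressed as fractions of the maximum score (matching the "0.0 to 1.0 coverage" invariant above); the key names are hypothetical, not the real scheme schema.

```ruby
# Pick the grade for a score fraction: sort bands by threshold descending
# and take the first band whose minimum the fraction reaches.
def grade_for(bands, fraction)
  bands.sort_by { |b| -b["min"] }
       .find { |b| fraction >= b["min"] }
       &.fetch("grade")
end

bands = [
  { "min" => 0.9, "grade" => "1.0" },
  { "min" => 0.8, "grade" => "2.0" },
  { "min" => 0.5, "grade" => "4.0" },
  { "min" => 0.0, "grade" => "5.0" } # full-range coverage down to 0.0
]
puts grade_for(bands, 0.85) # "2.0"
```

Because the last band starts at 0.0, every fraction maps to some grade, which is exactly what the coverage validation guarantees.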

6. Allocation Algorithm

Preference-Based (Flow Network)

| Invariant | Details |
| --- | --- |
| Each user assigned to ≤ 1 item | Flow solver ensures exclusivity |
| Total assigned to item ≤ capacity | Capacity constraint in network |
| Unassigned users get dummy edge | If allow_unassigned = true |
| No partial writes on failure | Transaction rollback on solver error |

First-Come-First-Served

| Invariant | Details |
| --- | --- |
| Submissions processed in timestamp order | Ordered query by created_at |
| Capacity checked atomically | Database-level row locking |
| Concurrent submissions handled safely | Pessimistic locking or retry logic |
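A sketch of the atomic capacity check, with the database row lock modeled by a Mutex; in Rails this would typically be `item.with_lock { ... }` (pessimistic locking). The class and method names are illustrative.

```ruby
class Item
  attr_reader :capacity, :confirmed

  def initialize(capacity)
    @capacity = capacity
    @confirmed = []
    @lock = Mutex.new # stands in for a database row lock
  end

  # Returns true if the user got a seat. The capacity check and the
  # insert happen under one lock, so no seat can be double-booked.
  def try_register(user_id)
    @lock.synchronize do
      return false if @confirmed.size >= @capacity
      @confirmed << user_id
      true
    end
  end
end

item = Item.new(2)
results = [101, 102, 103].map { |uid| item.try_register(uid) }
puts results.inspect # capacity 2: the third attempt is rejected
```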

7. Policy Engine

Invariants

| Invariant | Details |
| --- | --- |
| Policies evaluated in ascending position order | Stable sort ensures deterministic evaluation |
| First failure short-circuits | Remaining policies not evaluated |
| No side effects on policy failure | Read-only policy checks |
| Policy trace retained per request | For debugging and audit purposes |
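A framework-free sketch of these invariants: policies run in ascending position order, the first failure short-circuits the rest, and a trace records what was evaluated. Policy, Result, and the check logic are illustrative assumptions, not the real MaMpf API.

```ruby
Result = Struct.new(:pass, :trace)

Policy = Struct.new(:kind, :position) do
  # Read-only check: no side effects on failure
  def check(user)
    case kind
    when :student_performance then user[:certified]
    when :institutional_email then user[:email].end_with?("uni.edu")
    end
  end
end

def evaluate_policies(policies, user)
  trace = []
  policies.sort_by(&:position).each do |policy|
    ok = policy.check(user)
    trace << [policy.kind, ok]
    return Result.new(false, trace) unless ok # first failure short-circuits
  end
  Result.new(true, trace)
end

policies = [Policy.new(:institutional_email, 2),
            Policy.new(:student_performance, 1)]

denied = evaluate_policies(policies, { email: "a@uni.edu", certified: false })
puts denied.pass          # false
puts denied.trace.inspect # only the position-1 policy was evaluated
```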

8. Data Consistency Reconciliation

| Job | Purpose | Frequency |
| --- | --- | --- |
| RecountAssignedJob | Recompute assigned_count from confirmed submissions | Hourly |
| ParticipationTotalsJob | Verify points_total matches sum of task points | Daily |
| PerformanceRecordUpdateJob | Recompute Records after grade changes | After grade changes |
| CertificationStaleCheckJob | Flag certifications for review when Records change | After record updates |
| OrphanTaskPointsJob | Detect task points with missing participation/task | Weekly |
| RosterIntegrityJob | Check roster user counts vs. capacities | Daily |
| AllocatedAssignedMatchJob | Verify allocated_user_ids matches assigned users post-finalization | Weekly |

9. Idempotency Patterns

| Operation | Idempotency Strategy |
| --- | --- |
| Campaign.finalize! | Check status != :finalized before proceeding |
| materialize_allocation! | Replace entire roster (not additive) |
| GradeScheme::Applier.apply! | Compare version_hash; skip if unchanged |
| StudentPerformance::ComputationService.compute! | Upsert pattern preserves overrides |
| Roster::MaintenanceService operations | Each operation atomic with validation |
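A sketch of the Campaign.finalize! guard from the table: a status check makes repeated calls no-ops. The real model would wrap this in a database transaction; this class is a stand-in.

```ruby
class Campaign
  attr_reader :status, :materialize_count

  def initialize
    @status = :open
    @materialize_count = 0
  end

  def finalize!
    return if @status == :finalized # idempotent: already finalized
    @materialize_count += 1         # stands in for roster materialization
    @status = :finalized
  end
end

campaign = Campaign.new
3.times { campaign.finalize! }
puts campaign.status            # :finalized
puts campaign.materialize_count # 1 -- the side effect ran exactly once
```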

10. Security & Authorization

Access Control

These rules must be enforced via authorization layer (e.g., Pundit policies):

| Resource | Permission | Enforcement |
| --- | --- | --- |
| Campaigns | Create/modify | Staff only |
| Policies | Create/modify | Staff only |
| Submissions | Create | User for self, open campaign |
| Rosters | Modify | Staff only via MaintenanceService |
| Grades | Enter/modify | Staff/tutors only |
| Eligibility overrides | Set | Staff only with audit trail |

11. Monitoring & Alerts

Key Metrics

| Metric | Threshold | Action | Explanation |
| --- | --- | --- | --- |
| Orphan submissions | > 0 | Alert immediately | Submissions without a valid registration_item_id indicate broken foreign keys or data corruption |
| Allocation failures (last 24h) | > 0 | Alert staff | Failed registration assignments need manual review; may indicate capacity or constraint issues |
| Count drift per item | > 5 | Trigger recount job | Difference between assigned_count cache and actual roster count suggests cache staleness |
| Pending certifications during active campaigns | > 0 | Alert staff | Campaigns should not have pending certifications; blocks campaign operations |
| Stale certifications | > 10% of total | Alert staff | High staleness rate suggests Records are being recomputed but Certifications not reviewed |
| Performance record age during grading period | > 48h | Trigger recomputation | Stale Records mean certifications are based on outdated data |

Count Drift Metric

The "count drift" metric compares the cached assigned_count field on registration items against the actual number of confirmed roster entries. A drift > 5 suggests the cache is out of sync with reality, which can happen after manual roster modifications or failed callbacks. The recount job refreshes these cached values.

Extra Points Allowed

Points exceeding task maximum are intentionally permitted to support extra credit scenarios and bonus points. This is not considered an error condition.


12. Audit Checklist

Use this checklist for manual verification:

  • Random sample: confirmed submission IDs match roster user IDs (for recently finalized campaigns)
  • Random sample: points_total matches sum(task_points.points) for assessments
  • All certifications with manual overrides have non-null note field
  • No pending certifications exist during active registration campaigns
  • All lecture students have Certifications before exam campaign opens
  • Registration policy position values are continuous (no gaps) per campaign
  • Roster changes have audit trail entries
  • No orphan task points (all reference valid participation + task)
  • Assigned users (registration data) match allocated users (roster data) after finalization
  • Certifications are not auto-updated when Records change (stability check)

Controller Architecture

This chapter outlines the controllers needed to implement the MÜSLI integration into MaMpf. Controllers are organized by functional area and follow Rails conventions with namespacing.

Overview

How to read this chapter

We do not expose a public API. "Primary caller" refers to who invokes the controller actions inside MaMpf: HTML forms (Turbo), background jobs, or teacher/editor UIs. Use these sections to wire views, jobs, and service objects to the right endpoints.

At a glance

| Namespace | Key controllers | Primary caller |
| --- | --- | --- |
| Registration | Campaigns, UserRegistrations, Policies, Allocation | Teacher/Editor UI, Student UI, Job |
| Roster | Maintenance | Teacher/Editor UI |
| Assessment | Assessments, Grading, Participations | Teacher/Editor UI, Tutor UI |
| StudentPerformance | Records, Certifications, Evaluator | Teacher/Editor UI |
| Exam | Exams | Teacher/Editor UI |
| GradeScheme | Schemes | Teacher/Editor UI |
| Dashboard | Dashboard, Admin::Dashboard | Student UI, Teacher/Editor UI |

Controllers are grouped into the following namespaces:

  • Registration: Campaign setup, student registration, allocation
  • Roster: Post-allocation roster maintenance
  • Assessment: Assessment setup, grading, result viewing
  • StudentPerformance: Performance records, teacher certification, evaluator proposals
  • Exam: Exam management
  • GradeScheme: Grading scheme configuration
  • Dashboard: Student and teacher/editor views

Turbo responses

Controllers render HTML plus Hotwire responses:

  • Turbo Frames for partial page replacement within a frame.
  • Turbo Streams for broadcasting or incremental updates. Prefer frames for scoped, request/response UI flows; prefer streams for updates triggered by background jobs or actions affecting multiple parts of the page.

Registration Controllers

Registration::CampaignsController

Purpose

Manage registration campaigns for lectures.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| Registration::CampaignsController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| index | List all campaigns for a lecture |
| new | Form to create a new campaign |
| create | Create campaign with divisions and policies |
| show | View campaign details and status |
| edit | Edit campaign settings (before allocation) |
| update | Update campaign |
| destroy | Delete campaign (if no registrations exist) |

Responsibilities

  • CRUD operations for campaigns
  • Validate date ranges and capacity constraints
  • Display campaign status (draft, open, closed, processing, completed)
  • Pre-flight checks: Before opening campaigns with student_performance policies:
    • Verify all lecture students have StudentPerformance::Certification records
    • Ensure all certifications have status: :passed or :failed (no pending)
    • Block campaign opening with clear error message if checks fail
  • Pre-finalization checks: Before finalizing campaigns:
    • Re-validate certification completeness
    • Check for stale certifications (due to rule changes or record updates)
    • Prompt remediation workflow if issues exist
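The pre-flight certification check can be sketched without the framework, using plain hashes in place of the Certification model; the helper name and data shape are assumptions.

```ruby
# Return one blocker message per student whose certification is missing
# or still pending; the campaign may open only when this list is empty.
def certification_blockers(certifications_by_student, student_ids)
  student_ids.filter_map do |id|
    status = certifications_by_student[id]
    next if [:passed, :failed].include?(status)
    "student #{id}: #{status || 'missing'} certification"
  end
end

certs = { 1 => :passed, 2 => :pending }
blockers = certification_blockers(certs, [1, 2, 3])
blockers.each { |b| puts b }
```

In the controller, a non-empty blocker list would render the error message and keep the campaign in its current status.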

Registration::UserRegistrationsController

Purpose

Handle the student registration flow.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| Registration::UserRegistrationsController | Student UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| index | Show available campaigns for current user |
| new | Pre-flight eligibility check; show registration form if eligible, error message if not |
| create | Submit registration with ranked preferences (preference-based) or direct registration (FCFS) |
| show | View registration status and assigned roster |
| edit | Modify preferences (before allocation deadline) |
| update | Update preferences |
| destroy | Withdraw from campaign |

Responsibilities

  • Pre-flight eligibility check on page load via campaign.evaluate_policies_for(user, phase: :registration)
  • Conditionally render registration interface based on eligibility result
  • Display clear rejection reasons when policies fail (before user invests effort)
  • Display eligibility status based on StudentPerformance::Certification
  • Show certification status (passed/failed) for exam campaigns
  • Handle preference ranking (drag-and-drop or priority input)
  • Show allocation results after campaign completes
  • Validate registration constraints

Registration::PoliciesController

Purpose

Admin interface for managing registration policies.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| Registration::PoliciesController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| index | List all policies for a campaign |
| new | Create policy form |
| create | Create eligibility or allocation policy |
| edit | Modify policy |
| update | Update policy |
| destroy | Remove policy |

Responsibilities

  • Select policy type (eligibility vs allocation scoring)
  • Configure policies (score thresholds, enrollment requirements, etc.)
  • Policy preview/testing interface

Registration::AllocationController

Purpose

Trigger and monitor the allocation algorithm.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| Registration::AllocationController | Teacher/Editor UI, Job | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| show | View allocation status and preview |
| create | Trigger allocation algorithm |
| retry | Re-run allocation with adjusted parameters |
| finalize | Commit allocation results to rosters |
| allocate_and_finalize | Compute allocation and immediately finalize |

Responsibilities

  • Run allocation algorithm as background job
  • Display allocation statistics (satisfaction rate, unassigned students)
  • Allow parameter adjustments before finalization
  • Create rosters from allocation results
  • Support single-step allocate_and_finalize flow when desired
  • Delegate to Campaign API: allocate!, finalize!, allocate_and_finalize!
  • Validate certification completeness before finalization

Roster Controllers

Roster::MaintenanceController

Purpose

Handle post-allocation roster changes.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| Roster::MaintenanceController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| index | Overview of all rosters for a lecture |
| show | View specific roster with participants |
| edit | Modify roster metadata (e.g., tutor/time/place) |
| update | Save roster metadata changes; perform move/add/remove |
| move | Move students between rosters |

Responsibilities

  • Manual roster adjustments
  • Move participants between groups
  • Tutor reassignment
  • Capacity override
  • Does not re-run the automated solver or reopen the campaign
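A framework-free sketch of an atomic move as the hypothetical Roster::MaintenanceService might perform it: capacity is enforced unless explicitly overridden, and every change is appended to an audit log. Data shapes and names are illustrative.

```ruby
# Move a student between rosters, enforcing target capacity and
# recording actor, reason, and timestamp for the audit trail.
def move_student!(from, to, user_id, actor:, reason:, override_capacity: false, audit_log: [])
  raise "not in source roster" unless from[:user_ids].include?(user_id)
  if !override_capacity && to[:user_ids].size >= to[:capacity]
    raise "target roster full"
  end
  from[:user_ids].delete(user_id)
  to[:user_ids] << user_id
  audit_log << { action: :move, user_id: user_id, actor: actor,
                 reason: reason, at: Time.now }
  audit_log
end

t1 = { capacity: 2, user_ids: [1, 2] }
t2 = { capacity: 2, user_ids: [3] }
log = move_student!(t1, t2, 2, actor: "staff", reason: "schedule conflict")
puts t1[:user_ids].inspect # [1]
puts t2[:user_ids].inspect # [3, 2]
```

In the real service the two roster writes and the audit entry would share one database transaction.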

Assessment Controllers

Assessment::AssessmentsController

Purpose

Configure assessments for lectures.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| Assessment::AssessmentsController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| index | List all assessments for a lecture |
| new | Create assessment form |
| create | Create assessment with parameters |
| show | View assessment details |
| edit | Modify assessment settings |
| update | Update assessment |
| destroy | Delete assessment (if no grades exist) |
| publish_results | Publish results to students |

Responsibilities

  • Create and configure assessments
  • Set max points, weight, thresholds
  • Configure eligibility contribution
  • Link to specific divisions or entire lecture
  • Control visibility lifecycle (publish/unpublish results)

Assessment::GradingController

Purpose

Enter and manage grades.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| Assessment::GradingController | Tutor UI, Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| show | Grading interface for an assessment |
| update | Bulk update grades for multiple students |
| export | Export grades as CSV |
| import | Import grades from CSV |

Responsibilities

  • Display grading table (filterable by roster/division)
  • Bulk grade entry
  • Validate grade values (0 to max_points)
  • Calculate derived metrics (percentages, pass/fail)

Assessment::ParticipationsController

Purpose

Student view of assessment results.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| Assessment::ParticipationsController | Student UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| index | List all assessments student can view |
| show | View grades and feedback for specific assessment |

Responsibilities

  • Display personal grades
  • Show aggregate statistics (if configured)
  • Feedback and comments from graders

Exam Controllers

ExamsController

Purpose

Manage exam instances for lectures.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| ExamsController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| index | List all exams for a lecture |
| new | Create exam form |
| create | Create exam |
| show | View exam details and certification summary |
| edit | Modify exam settings |
| update | Update exam |
| destroy | Delete exam |

Responsibilities

  • Exam scheduling (date, location)
  • Registration deadline management
  • Display certification summary (number of students passed/failed)
  • Link to certification dashboard for detailed review
  • Export eligible student list (students with status: :passed)

Student Performance Controllers

StudentPerformance::RecordsController

Purpose

View and manage performance records (factual data only).

| Controller | Primary callers | Responses |
| --- | --- | --- |
| StudentPerformance::RecordsController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| index | List performance records for all students |
| show | Detailed performance breakdown for one student |
| recompute | Trigger recomputation for specific student(s) |
| export | Export performance data |

Responsibilities

  • Display factual performance data (points, achievements)
  • Show computation timestamp and rule version
  • Trigger manual recomputation when needed
  • Export performance data for analysis
  • No eligibility interpretation - pure factual data display

StudentPerformance::CertificationsController

Purpose

Teacher certification workflow for eligibility decisions.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| StudentPerformance::CertificationsController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| index | Certification dashboard showing all students |
| create | Bulk create certifications from proposals |
| update | Manual override of individual certification |
| bulk_accept | Accept all proposals in one action |
| remediate | Handle stale/pending certifications before campaign operations |
| export | Export certification list |

Responsibilities

  • Display certification dashboard with proposals
  • Show proposed status (passed/failed) for each student
  • Bulk accept proposals (common case)
  • Manual override with required note field
  • Handle rule change diff: "12 students would change: failed → passed"
  • Remediation workflow for pending/stale certifications
  • Verify completeness before allowing campaign operations
  • Track certification history (certified_at, certified_by)

StudentPerformance::EvaluatorController

Purpose

Generate eligibility proposals using the Evaluator service.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| StudentPerformance::EvaluatorController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
| --- | --- |
| bulk_proposals | Generate proposals for all lecture students |
| preview_rule_change | Show impact of rule modifications before saving |
| single_proposal | Generate proposal for specific student (review case) |

Responsibilities

  • Call StudentPerformance::Evaluator.bulk_proposals(lecture)
  • Display proposals in reviewable format
  • Show rule change preview: affected students and status changes
  • Generate single proposal for manual review cases
  • Does not create Certifications - only generates proposals for teacher review

Grade Scheme Controllers

GradeScheme::SchemesController

Purpose

Configure grading schemes for courses.

| Controller | Primary callers | Responses |
| --- | --- | --- |
| GradeScheme::SchemesController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
|---|---|
| index | List all schemes for a course |
| new | Form for a new grading scheme |
| create | Save scheme with thresholds |
| edit | Modify scheme |
| update | Update scheme |
| apply | Apply scheme to assessment results |
| preview | Preview grade distribution |

Responsibilities

  • Define grade thresholds (1.0, 1.3, 1.7, ..., 5.0)
  • Configure bonus points and rounding policies
  • Preview grade distribution before finalizing
  • Apply scheme to generate final grades

Dashboard Controllers

DashboardController

Purpose

Student-facing dashboard.

| Controller | Primary callers | Responses |
|---|---|---|
| DashboardController | Student UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
|---|---|
| show | Overview of all enrollments, registrations, grades |

Responsibilities

  • Display active campaigns requiring action
  • Show roster assignments
  • List assessment results
  • Display exam eligibility status

Admin::DashboardController

Purpose

Teacher and editor dashboard.

| Controller | Primary callers | Responses |
|---|---|---|
| Admin::DashboardController | Teacher/Editor UI | HTML, Turbo Frames/Streams |

Actions

| Action | Purpose |
|---|---|
| show | Overview of all campaigns, rosters, assessments for managed lectures |

Responsibilities

  • Quick access to campaign management
  • Roster statistics and allocation quality metrics
  • Grading progress tracking
  • Exam eligibility overview

RESTful Design Principles

RESTful design

All controllers follow Rails conventions:

  • Use standard REST actions where possible.
  • Nest resources appropriately (lectures/:id/campaigns).
  • Use member vs collection routes correctly.
  • Render HTML and Hotwire responses (Turbo Frames/Streams); no public JSON API.
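The conventions above could translate into a routes file along these lines. This is a hedged sketch (a config fragment, not the final routing table): the module nesting follows the controller namespaces used in this chapter, but member-route names and exact nesting depth are assumptions.

```ruby
# config/routes.rb — illustrative fragment only
resources :lectures, only: [] do
  scope module: :registration do
    resources :campaigns do
      member do
        post :open    # draft → open
        post :close   # open → processing
        post :reopen  # processing → open
      end
      resources :policies
      resource :allocation, only: [:show, :create] do
        member do
          post :retry
          post :finalize
          post :allocate_and_finalize
        end
      end
    end
  end
end
```

Note how lifecycle transitions (open/close/reopen) are member routes on the campaign, while allocation is a nested singular resource, matching the "member vs collection" convention.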

Authorization

Authorization

Controllers integrate with CanCanCan abilities:

  • load_and_authorize_resource for standard CRUD.
  • Custom checks for special actions (allocation, grading).
  • Role-based access (student, tutor, teacher, editor, admin).
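As a hedged illustration, an Ability class for this setup might look as follows. The role predicates and association names (`edited_lecture_ids`, `tutor?`, `admin?`) are assumptions; only the CanCanCan DSL itself (`can`, `CanCan::Ability`) is standard.

```ruby
# app/models/ability.rb — sketch, not the actual authorization rules
class Ability
  include CanCan::Ability

  def initialize(user)
    # Students may read campaigns and manage their own registrations
    can :read, Registration::Campaign
    can :manage, Registration::UserRegistration, user_id: user.id

    # Editors/teachers manage campaigns of lectures they edit (assumed association)
    can :manage, Registration::Campaign, lecture_id: user.edited_lecture_ids

    # Admins bypass everything
    can :manage, :all if user.admin?
  end
end
```

Special actions such as running an allocation or bulk-accepting certifications would carry explicit `authorize!` checks in the controller rather than relying on `load_and_authorize_resource` alone.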

Error Handling

Error handling

Controllers should handle:

  • ActiveRecord validation errors (display form errors).
  • Authorization failures (redirect with flash message).
  • Background job failures (show status and retry option).
  • Constraint violations (e.g., deleting campaign with registrations).
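Two of these cases can be centralized with `rescue_from`. The fragment below is a sketch: the exception classes are standard Rails/CanCanCan, but the redirect targets and messages are placeholders.

```ruby
# app/controllers/application_controller.rb — illustrative fragment
class ApplicationController < ActionController::Base
  # Authorization failures: redirect with a flash message
  rescue_from CanCan::AccessDenied do |exception|
    redirect_back fallback_location: root_path, alert: exception.message
  end

  # Constraint violations, e.g. destroying a campaign that still has registrations
  rescue_from ActiveRecord::RecordNotDestroyed do |exception|
    redirect_back fallback_location: root_path,
                  alert: exception.record.errors.full_messages.to_sentence
  end
end
```

Validation errors, by contrast, are usually handled per action by re-rendering the form with `status: :unprocessable_entity`, and background job failures surface through the status views described for the Allocation controller.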

View Architecture

This chapter outlines view conventions and examples for the MÜSLI integration. It pairs with the Controller Architecture chapter and focuses on HTML ERB views, Hotwire (Turbo Frames/Streams), and ViewComponents usage.

How to read this chapter

This chapter specifies view conventions and key screens by feature area. It complements Controllers: prefer HTML + Turbo (Frames/Streams) with server-rendered ERB and minimal JS via Stimulus.

At a glance

| Area | Key views/components | Primary callers |
|---|---|---|
| Registration | Campaigns (index/show/forms), Student Registration | Teacher/Editor UI, Student UI |
| Roster | Maintenance (index/show/edit) | Teacher/Editor UI |
| Assessment | Assessments (CRUD), Grading table, Grade schemes, Participations | Teacher/Editor UI, Tutor UI, Student |
| Student Performance | Certification Dashboard, Performance Records, Evaluator | Teacher/Editor UI, Student UI |
| Exam | Exams (CRUD), Exam Roster | Teacher/Editor UI |
| Dashboard | Student dashboard, Teacher/Editor dashboard | Student UI, Teacher/Editor UI |

The feature sections below (Registration Screens, Rosters, Assessments, Exams, Grade Schemes) include two tables:

  • A Screens table to summarize each screen's purpose, main UI parts, and interaction model at a glance. It helps designers and developers align on scope and Hotwire usage, and links to static mockups when available.
  • A Controller/action mapping table to tie those screens to concrete Rails endpoints and roles. It clarifies routing, authorization, and which actions are invoked from each screen. The following keys apply:

Table keys (all sections)

Screens tables:

  • View: page/screen name.
  • Key elements: main UI parts.
  • Hotwire: Frames/Streams used on this screen (when listed).
  • Mockup: link to static HTML when available.

Mapping tables:

  • View: the screen the row refers to (links to mockup if present).
  • Role: actor (Teacher/Editor, Tutor, Student) when access differs.
  • Controller: Rails controller handling the request.
  • Actions: controller actions called from the view.
  • Scope/Notes: brief intent or constraints.

Conventions

  • Templates: ERB (.html.erb).
  • Components: ViewComponents in app/frontend/_components/ or feature folders.
  • Hotwire: Frames vs Streams choice is deferred; decide per screen later.
  • Stimulus: Use .controller.js suffix; colocate with feature folder under app/frontend/.
  • Styling: SCSS; colocate per feature when practical.

Partials vs ViewComponents

Decision guide

Start simple. Use a partial first. Promote to a ViewComponent when the fragment becomes reusable, complex, or needs its own tests/JS/styles.

| Use a partial when... | Use a ViewComponent when... |
|---|---|
| It is local to a single page/feature | It is reused across pages/features |
| Presentation-only, minimal branching | Encapsulates logic/variants/states |
| Small fragment (row, cell, inline form field) | Owns JS/CSS (Stimulus) or wraps a reusable Frame |
| Minimal/no Stimulus behavior | Needs a stable API (kwargs/slots) |
| No dedicated unit tests needed | Deserves unit tests and composition via slots |
| No caching/memoization required | Will benefit from caching/memoization |

Turbo in views

Both partials and components can live inside Turbo Frames. If background jobs will stream updates to the same fragment in several contexts, prefer a component and render it from stream templates. For one-off stream responses, a partial is fine.

Placement & API

Partials: colocate near the parent view (e.g., app/frontend/registration/.../_row.html.erb) and pass explicit locals.

ViewComponents: place in app/frontend/_components/ or feature-specific folders. Prefer keyword args and slots for a clear contract.

Migration path

Extract a growing partial into a ViewComponent without changing callers. Keep the component API narrow and clear via initializer and slots.
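A promoted component might look like the sketch below. The component name, file location, and the capacity-meter example are hypothetical; only the ViewComponent API (`ViewComponent::Base`, `renders_one`) is the library's own.

```ruby
# app/frontend/_components/capacity_meter_component.rb — illustrative sketch
class CapacityMeterComponent < ViewComponent::Base
  renders_one :warning # optional slot rendered when a group is near capacity

  def initialize(filled:, capacity:)
    @filled = filled
    @capacity = capacity
  end

  # Exposed to the template; keeps percentage math out of the ERB
  def percentage
    return 0 if @capacity.zero?
    (100.0 * @filled / @capacity).round
  end
end
```

A caller then renders it with explicit keyword arguments, e.g. `<%= render CapacityMeterComponent.new(filled: 18, capacity: 20) %>`, which is the "narrow and clear" API the migration path asks for.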

Layout & Partials

  • Use partials for reusable fragments (tables, forms, flash bars).
  • Extract repeated frame shells (table headers, pagination) into partials.
  • Keep forms server-rendered; augment with Stimulus when needed.

Mockups

Mockups

Preview static screens while wiring controllers and models. Mockup links also appear in the per-feature tables below. All mockups are styled with Bootstrap 5 (via CDN) to match the app's component library for faster transfer from mockup to real views.

Yellow underlined rows in tables visualize an inline edit state of the preceding white row. Mockups may show both side-by-side to illustrate the edit UI; in the real UI, only one would be visible at a time.

For a complete index of all mockups organized by feature area, see Mockups Index.

Registration Screens

Campaigns (Teacher/Editor)

Screens

See Table keys above for column meanings.

Settings live in Show

All campaign settings are edited inline on the Show page via the Settings tab. There is no separate Settings page.

Note

When a campaign is completed, the Settings tab is read-only. The "Planning-only" option is visible but disabled. In Draft/Open, the "Planning-only" option can be toggled; enabling it hides finalization paths in the UI (Allocation/Finalize).

| View | Key elements | Mockup |
|---|---|---|
| Index (Lecture) | Minimal table for a single lecture; status chips | Mockup |
| Index (Current term, grouped) | Grouped by lecture for the teacher/editor; no search needed | Mockup |
| Show (Exam – FCFS) | Summary panel; tabs: Overview, Settings, Items, Policies, Registrations, Allocation; FCFS shows certification status | Mockup |
| Show (Exam – FCFS, draft with incomplete certifications) | Draft exam campaign with warning banner showing 24 incomplete certifications; pre-flight checks panel; highlighted certification policy; collapsible details; link to Certification Dashboard | Mockup |
| Show (Tutorials – FCFS, open) | Summary panel; tabs: Overview, Settings, Items, Policies, Registrations, Allocation | Mockup |
| Show (Tutorials – preference-based, open) | Summary panel; tabs: Overview, Settings, Items, Policies, Registrations, Allocation; preference-based shows preferences | Mockup |
| Show (Tutorials – preference-based, completed) | Summary panel; tabs: Overview, Settings, Items, Policies, Registrations, Allocation; preference-based shows preferences | Mockup |
| Show (Interest – draft) | Summary panel; tabs: Overview, Settings, Items, Policies, Registrations, Allocation | Mockup |
| Forms (Items & Policies tabs) | Inline create/edit for items and policies | See Show mockups (tabs) |
| Pre-flight error modal (exam campaign, registration-phase policy) | Modal shown when attempting to open exam campaign with registration-phase student_performance policy and incomplete certifications; blocks opening until all lecture students have finalized certification status; scrollable table of 24 affected students with names, matriculation numbers, performance points/percentage, status (pending/missing); links to Certification Dashboard for batch resolution | Mockup |

Flow

flowchart LR
  subgraph TEACHER_EDITOR [Teacher/Editor]
    CIDX[Index] --> CNEW[New]
    CIDX --> CSHW[Show]
    CSHW --> OVW[Overview tab]
    CSHW --> SET[Settings tab]
    CSHW --> ITM[Items tab]
    CSHW --> POL[Policies tab]
    CSHW --> REGS[Registrations tab]
    CSHW --> ALLOCT[Allocation tab]
    CSHW --> CLOSE[Close registration]
    CLOSE --> ALLOC[Run allocation]
    ALLOC --> FIN[Finalize]
  end

Controller and actions mapping (teacher/editor)

| Surface/Control | Controller | Action(s) | Preconditions | Notes |
|---|---|---|---|---|
| Index | Registration::CampaignsController | index | | List campaigns for a lecture |
| Show | Registration::CampaignsController | show | | Overview with tabs |
| New/Edit/Delete (campaign) | Registration::CampaignsController | new, create, edit, update, destroy | Destroy only if no registrations | Manage metadata and dates |
| Open registration | Registration::CampaignsController | open | Draft only | Status: draft → open → closed → processing → completed |
| Close registration | Registration::CampaignsController | close | Open only | Stop intake: open → processing |
| Reopen registration | Registration::CampaignsController | reopen | Processing only | Before finalization: processing → open |
| Policies tab (CRUD) | Registration::PoliciesController | index, new, create, edit, update, destroy | | Manage registration policies |
| Allocation — show | Registration::AllocationController | show | | Allocation status/progress |
| Allocation — run | Registration::AllocationController | create | Processing; not planning-only | Trigger allocation |
| Allocation — retry | Registration::AllocationController | retry | After failure | Retry failed run |
| Allocation — finalize | Registration::AllocationController | finalize | Allocation ready; not planning-only | Materialize results |
| Allocation — allocate+finalize | Registration::AllocationController | allocate_and_finalize | Shortcut | One-step path |

Student Registration

Screens

| View | Key elements | Mockup |
|---|---|---|
| Index (tabs) | Tabs: Courses & seminars, Exams; global filters with per-tab scoping; groups: Open, Closed (you registered), Closed (not registered) | Mockup |
| Show (preference-based) | Rank-first preferences (K=6, M=5), searchable catalog with pagination, add/remove/reorder ranks, save status | Mockup |
| Show (FCFS) | Register/Withdraw for whole course (e.g., seminar), live seat counters, async save with status | Mockup |
| Show (FCFS – tutorials) | Choose a specific tutorial; per-group capacity/filled, disabled when full; async save with status | Mockup |
| Show (FCFS – exam) | Exam seat registration; date/time/location details; register/withdraw; hall capacity info; async save with status; certification status badge (passed/failed/pending) | Mockup |
| Show (FCFS – exam; action required: institutional email) | Registration gated by campaign policy; example shown: institutional email domain. Page links to fulfill the requirement; Register enabled once satisfied | Mockup |
| Show (FCFS – exam; failed certification) | Registration blocked by failed certification. Error message: "Cannot register: You have not met the student performance requirements." Link to Performance Overview. No Register button. | Mockup |
| Show (FCFS – exam; pending certification) | Registration blocked by pending certification. Warning message: "Cannot register yet: Your certification is pending teacher review." Check back message. No Register button. | Mockup |
| Confirmation (result) | Completed registration outcome; shows assignment (e.g., Tutorial group C) and preference summary | Mockup |

Flow

flowchart LR
  subgraph Student
    IDX[Index] --> SHW[Visit campaign page]
    SHW --> ELIG{Eligible?}
    ELIG -->|No - Policy failed| ERR[Show error with reason]
    ELIG -->|No - Action required| REQS[Show action required page]
    REQS --> FULFILL[Fulfill requirement]
    FULFILL --> SHW
    ELIG -->|Yes| MODE{Mode}
    MODE -->|Preference-based| FORM_PREF[Show preference form]
    FORM_PREF --> RANK[Rank preferences]
    RANK --> SUBMIT[Submit]
    SUBMIT --> CONF_PREF[Confirmation submitted]
    CONF_PREF -.-> ALLOC[Allocation results after close]
    MODE -->|FCFS| FORM_FCFS[Show register buttons]
    FORM_FCFS --> REG[Click register/withdraw]
    REG --> CONF_FCFS[Confirmation enrolled/withdrawn]
  end

  subgraph TeacherEditor
    MGT[Manage Campaigns]
  end

Note

Teacher/Editor “Manage Campaigns” configures mode, policies, and dates that govern the Student flow. It does not imply a navigation path to the Student “Show”.

Controller and actions mapping (student)

| Surface/Control | Controller | Action(s) | Preconditions | Notes |
|---|---|---|---|---|
| Index (tabs) | Registration::UserRegistrationsController | index | | Tabs: Courses & seminars, Exams; filters: Status, Registration, Semester |
| Show (preference-based) | Registration::UserRegistrationsController | show | | Rank-first page |
| Preferences — edit | Registration::UserRegistrationsController | edit | | Renders editor in-page |
| Preferences — save | Registration::UserRegistrationsController | update | Valid ranks only | Persists and re-renders |
| Show (FCFS — course) | Registration::UserRegistrationsController | show | | Enroll/withdraw context |
| Register (FCFS course) | Registration::UserRegistrationsController | update | Policy checks pass; seats available | |
| Choose tutorial + register (FCFS) | Registration::UserRegistrationsController | update | Seats available | Multi-item picker |
| Register (exam) | Registration::UserRegistrationsController | update | Requirements met | Policy-gated; show required actions per campaign policy |
| Withdraw | Registration::UserRegistrationsController | destroy | Only when registered | |
| Confirmation (result) | Registration::UserRegistrationsController | show | After submit/close | Shows assignment and summary |
| Fulfill requirements (policy) | Policy-configured flow | External or internal | | Follow instructions to satisfy policy, then retry |

Rosters

Flow

flowchart LR
  OVR[Overview] --> DET[Detail]
  DET --> EDITR[Edit]
  OVR --> ASSIGN[Assign candidate]
  OVR --> ADD[Add student]
  OVR --> DELETE[Delete empty group]

| View | Key elements | Mockup |
|---|---|---|
| Overview | List/table of groups with Tutor/Time/Place; search/filter; per-row capacity meter; Manage action; right-side “Candidates from campaign” panel (unassigned only) with search, top-3 preferences, Assign to…; capacity guard. For exams, the candidates panel is not shown. | Tutorials; Seminar; Exam |
| Detail | Participants table with search; remove/move; capacity guard | Tutorial; Seminar; Exam; Tutor (read-only) |

Controller and actions mapping (teacher/editor)

| Surface/Control | Controller | Action(s) | Preconditions | Notes |
|---|---|---|---|---|
| Overview | Roster::MaintenanceController | index | | Overview across rosters; candidates panel (unassigned only) |
| Show (Detail) | Roster::MaintenanceController | show | | Participants table; capacity info |
| Edit/Update (roster metadata) | Roster::MaintenanceController | edit, update | | Inline edit frame; persist changes |
| Assign candidate (from Overview) | Roster::MaintenanceController | update | Capacity available | Add participant from candidates panel |
| Move participant (in Detail) | Roster::MaintenanceController | update | Capacity available | Change group for a participant |
| Remove participant (in Detail) | Roster::MaintenanceController | update | | Remove a student from the roster |
| Delete empty roster | Roster::MaintenanceController | destroy | Only when empty | Delete action from Overview |

Controller and actions mapping (tutor)

| Surface/Control | Controller | Action(s) | Preconditions | Notes |
|---|---|---|---|---|
| Show (Detail) | Roster::MaintenanceController | show | | Read-only for own groups (if permitted) |

Tutors have no access to the edit, update, or destroy actions.

Assessments

Context-Specific Views

Assessment views differ between regular lectures and seminars:

  • Lectures: Show assignments and exams; "New Assessment" button with dropdown
  • Seminars: Show talks only; no "New Assessment" button (talks created via Content tab); inline grading interface

Assessments (Lectures - Teacher/Editor)

Screens

See Table keys above for column meanings.

Lecture Context Only

The views below apply to regular lectures. For seminar-specific views, see the Seminars section.

| View | Key elements | Mockup |
|---|---|---|
| Index | List of assignments and exams with status/type badges; filter tabs; progress indicators; "New Assessment" button | Mockup |
| Index (End of Semester) | Same as Index, showing complete semester timeline: 8 graded assignments, midterm exam graded, final exam in progress | Mockup |
| New | Form with type dropdown (Assignment/Exam); dual-mode support (Pointbook/Gradebook); dynamic task management; schedule settings | Mockup |
| Show (Assignment - Open) | Tabbed interface (Overview/Settings/Tasks/Participants); submission progress tracking; before grading starts | Mockup |
| Show (Assignment - Closed) | Tabbed interface (Overview/Settings/Tasks/Tutorials/Grading/Statistics); submission progress; tutorials publication management; grading table with filters and sorting | Mockup |
| Show (Exam - Draft) | Tabbed interface (Overview/Settings/Tasks/Exam Logistics/Participants); configuration and setup phase | Mockup |
| Show (Exam - Closed) | Tabbed interface (Overview/Settings/Tasks/Exam Logistics/Participants); grading in progress; tutor assignment tracking | Mockup |
| Show (Exam - Graded) | Tabbed interface with Statistics tab; grade distribution; results publication status; average scores per question | Mockup |

Flow

flowchart LR
  subgraph TEACHER_EDITOR [Teacher/Editor]
    SETUP[Setup] --> GR[Grading]
    GR --> PUB[Publish results]
  end
  subgraph TUTOR [Tutor]
    GRT[Grading entry]
  end
  subgraph STUDENT [Student]
    RES[Results]
  end
  PUB --> RES

Controller/action mapping (role-specific)

| Role | Controller | Actions | Scope |
|---|---|---|---|
| Teacher/Editor | Assessment::AssessmentsController | index, new, create, show, edit, update, destroy | Setup |
| Teacher/Editor | Assessment::AssessmentsController | publish_results | Visibility lifecycle |
| Teacher/Editor | Assessment::GradingController | show, update, export, import | Grading + bulk ops |
| Tutor | Assessment::GradingController | show, update | Grading (enter/update points) |
| Tutor | Assessment::AssessmentsController | index, show | Read-only |
| Student | Assessment::ParticipationsController | index, show | Own results (when published) |

Assessments (Lectures - Tutor)

Screens

Team-Based Grading

Tutors grade their tutorial's teams for assignments. Points entered once per team are automatically applied to all team members. The interface works for both digital and paper submissions.

| View | Key elements | Mockup |
|---|---|---|
| Grading (Tutorial) | Team-based grading table; per-task point inputs; progress indicator; filter by graded/not graded; submission links; auto-calculated totals | Mockup |

Flow

flowchart LR
  subgraph TUTOR [Tutor]
    TUT[Tutorial view] --> GRADE[Grading page]
    GRADE --> ENTER[Enter points per task]
    ENTER --> SAVE[Save team grade]
    SAVE --> NEXT{More teams?}
    NEXT -->|Yes| ENTER
    NEXT -->|No| COMPLETE[Mark complete]
  end

Controller/action mapping

| Role | Controller | Actions | Scope |
|---|---|---|---|
| Tutor | Assessment::GradingController | show | Display grading table for tutorial |
| Tutor | Assessment::GradingController | update | Save points for one team (creates TaskPoints for all members) |

Team Grading Service

The backend uses Assessment::TeamGradingService to propagate points from team input to individual Assessment::TaskPoint records for each team member. This ensures consistent grading within teams while maintaining per-user granularity for reporting.
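The propagation step can be illustrated in plain Ruby. This is a sketch of the idea only, with illustrative data shapes, not the service's actual interface:

```ruby
# Sketch of the fan-out inside a team grading service: one set of per-task
# points entered for a team becomes one TaskPoint-like row per member per
# task. The hash-of-rows shape is illustrative.
def propagate_team_points(team_members, task_points)
  team_members.flat_map do |member|
    task_points.map do |task_id, points|
      { user: member, task: task_id, points: points }
    end
  end
end

propagate_team_points(%w[alice bob], { 1 => 8.0, 2 => 5.5 })
# produces 4 rows: 2 members × 2 tasks, identical points within the team
```

The real service would additionally wrap the writes in a transaction and mark existing rows as updated rather than duplicating them; the point here is only the team-to-member fan-out that keeps grading consistent within teams while preserving per-user granularity.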

Assessments (Lectures - Exam Grading Workflow)

Phase-Based Workflow

Exam grading uses a multi-phase workflow designed for paper-based exams where points are entered in batch, then a grade scheme is created based on the actual distribution.

Backend Architecture

For grade scheme data models, services, algorithms, and implementation details, see Grading Schemes.

The exam grading workflow progresses through four phases:

  1. Phase 1: Point Entry — Teachers enter task points for each student; grade column remains empty
  2. Phase 2: Distribution Analysis — View histogram, statistics, and percentiles of achieved points
  3. Phase 3: Scheme Configuration — Set excellence/passing thresholds or manually define grade boundaries
  4. Phase 4: Scheme Applied — Grades auto-computed; point edits auto-update grades

Screens

| View | Key elements | Mockup |
|---|---|---|
| Phase 1: Point Entry | Editable task point inputs; empty grade column with "—"; Grade Scheme tab disabled (tooltip: "Complete point entry first"); progress alert showing X/N graded | Mockup |
| Phase 2: Distribution Analysis | Grade Scheme tab active with "New" badge; CSS histogram with 10 bars; statistics card (min/max/mean/median/std dev); percentiles card (10th-90th); "Create Grade Scheme" button | Mockup |
| Phase 3: Scheme Configuration | Inline configuration card; two-mode tabs (Two-Point Auto / Manual Curve); threshold inputs (Excellence: 1.0, Passing: 4.0); "Auto-Generate Bands" button; generated bands preview table with 9 grades; pass rate calculation; "Save as Draft" button | Mockup |
| Phase 4: Scheme Applied | Grading tab showing computed grades; grade cells have blue background with calculator icon; alert explaining auto-update behavior; all 145 students with final grades; success message with publish prompt | Mockup |

Two-Point Auto Algorithm

The Two-Point Auto mode simplifies grade scheme creation:

  1. Set Excellence threshold (e.g., 54 pts = 1.0) — Students at or above this score receive grade 1.0
  2. Set Passing threshold (e.g., 30 pts = 4.0) — Minimum score to pass; below this is 5.0 (fail)
  3. System auto-generates intermediate bands (1.3, 1.7, 2.0, 2.3, 3.0, 3.7) with equal intervals
  4. Preview shows point ranges, student count, and percentage per grade band
  5. Pass rate calculated automatically (students with 4.0 or better / total)
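The interpolation in step 3 can be sketched in plain Ruby. The exact intermediate grade ladder is an assumption here (the standard German grade steps); the real band generation lives in the grade scheme services:

```ruby
# Illustrative Two-Point Auto interpolation: given the points needed for
# grade 1.0 (excellence) and grade 4.0 (passing), derive the minimum points
# for every grade in between by linear interpolation over the grade ladder.
GRADES = [1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 4.0].freeze

def two_point_bands(excellence_pts, passing_pts)
  span = GRADES.last - GRADES.first # 3.0 grade units between the anchors
  GRADES.to_h do |grade|
    fraction = (grade - GRADES.first) / span
    min_pts = excellence_pts - fraction * (excellence_pts - passing_pts)
    [grade, min_pts.round(1)]
  end
end

two_point_bands(54, 30)
# grade 1.0 needs >= 54.0 pts, grade 4.0 needs >= 30.0 pts,
# grade 2.0 falls a third of the way down at 46.0 pts; below 30 is 5.0 (fail)
```

The preview table in the UI would then count students per band and derive the pass rate from everyone at or above the 4.0 minimum.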

Grade Auto-Update Behavior

After applying a grade scheme:

  • Grades are computed automatically from total points
  • If the teacher edits any task points, the grade recalculates immediately
  • The grade column shows a blue background and calculator icon to indicate a computed value
  • Manual override is possible (it triggers a warning and marks the grade as overridden)
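The auto-update behavior follows from treating the grade as a pure function of total points and the saved bands: a point edit simply re-runs the lookup. A minimal sketch, with an illustrative `{grade => minimum points}` band table:

```ruby
# Illustrative band table: minimum points required for each grade.
BANDS = { 1.0 => 54, 1.3 => 51, 1.7 => 48, 2.0 => 46,
          2.3 => 43, 2.7 => 40, 3.0 => 38, 3.3 => 35,
          3.7 => 33, 4.0 => 30 }.freeze

# Best (lowest-numbered) grade whose minimum the points satisfy;
# anything below the passing threshold is a 5.0.
def grade_for(points, bands = BANDS)
  grade, = bands.sort_by { |g, _| g }.find { |_, min| points >= min }
  grade || 5.0
end

grade_for(55) # => 1.0
grade_for(29) # => 5.0 (below the passing threshold)
```

A manual override would bypass this lookup and persist an explicit grade plus an overridden flag, which is why the UI warns before allowing it.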

Flow

flowchart LR
  subgraph TEACHER_EDITOR [Teacher/Editor]
    P1[Phase 1: Enter points] --> P2[Phase 2: Analyze distribution]
    P2 --> P3[Phase 3: Configure scheme]
    P3 --> P4[Phase 4: Grades computed]
    P4 --> PUB[Publish results]
  end

Single Scheme Version

Only one active scheme exists per exam; there is no version history tracking. If a teacher needs to adjust the scheme, they edit the existing one.

Manual Curve Mode

For advanced users, Manual Curve mode allows direct control of each grade boundary by dragging markers on the histogram or editing the boundary table directly.

Controller Reference

Grade scheme functionality is implemented in GradeScheme::SchemesController with actions: index, new, create, edit, update, preview, and apply. See Controller Architecture for details.

Assessments (Lectures - Student)

Student Results Views

Students can view their published assignment results, including overall progress and detailed feedback on individual assignments. Results are only visible after tutors publish grades.

Screens

| View | Key elements | Mockup |
|---|---|---|
| Results Overview | Two-column layout: left sidebar with progress summary (large 80% display, points 192/240, graded 6/8, average 24/30), certification status card (passed/failed/pending with link to Performance Overview), filter buttons (All/Graded/Pending); right column with compact assignment list (condensed cards showing title, dates, score, view button), collapsible section for older assignments | Mockup |
| Results Detail (Assignment) | Compact single-page layout with assignment header (title, dates, grader, score 28/30), condensed team info (single row), simple task breakdown table (just task numbers and points, no descriptions or percentages), optional short tutor comment, submitted files (student's submission + tutor's correction PDF), progress sidebar (overall points 192/240, certification status), action buttons | Mockup |
| Results Detail (Exam) | Single-page layout with exam header (title, date/location, grader info). Large grade display (1.3) with pass/fail badge. Score display (55.0/60, 92%). Task breakdown table showing points per task. Full grading scheme table with your grade highlighted. Optional grader comment. Registration info. Class statistics (average, highest/lowest, pass rate). Lecture performance status sidebar. Download certificate button. | Mockup |

Flow

flowchart LR
  subgraph STUDENT [Student]
    A[Results Overview<br/>Index Page] -->|Click View Details| B[Results Detail<br/>Show Page]
    B -->|Back Button| A
    A -->|Filter: All/Graded/Pending| A
    A -->|Expand Older Assignments| A
    B -->|Download Feedback PDF| B
    B -->|View Assignment Page| C[Assignment Details]
  end

Controller/action mapping

| Role | Controller | Actions | Scope/Notes |
|---|---|---|---|
| Student | Assessment::ParticipationsController | index, show | Own results (when published). Students can view their assignment results and detailed feedback only after tutors publish them. Results include overall progress tracking, certification status, per-task breakdown, and tutor feedback. |

Assessments (Seminars - Teacher/Editor)

Screens

Streamlined Grading

Seminars show only talks with inline grading for fast workflow. Talks are created via the Content tab, not the Assessments tab.

| View | Key elements | Mockup |
|---|---|---|
| Index (Seminar) | List of talks with inline grading; columns: Title, Speaker(s), Grade (inline dropdown), Status, Actions; no "New Assessment" button; help text: "Talks are created in the Content tab" | Mockup |
| Show (Talk) | Tabbed interface (Overview/Settings/Participants); final grade display; speaker details; feedback notes | Mockup |

Flow

flowchart LR
  subgraph TEACHER_EDITOR [Teacher/Editor]
    CONTENT[Content tab: Create talk] --> AUTO[Auto-create assessment]
    AUTO --> ASSESS[Assessments tab: View list]
    ASSESS --> INLINE[Inline grade]
    INLINE --> DETAIL[Optional: Click for details]
    DETAIL --> FEEDBACK[Add feedback]
  end

Controller/action mapping

| Role | Controller | Actions | Scope |
|---|---|---|---|
| Teacher/Editor | Assessment::AssessmentsController | index, show | Read-only list; inline grading |
| Teacher/Editor | Assessment::GradingController | update | Save inline grade |
| Teacher/Editor | Assessment::AssessmentsController | show (detail view) | Add feedback notes |
| Teacher/Editor | Assessment::AssessmentsController | publish_results | Visibility lifecycle |

No Creation Actions

Seminars do not expose new, create, or destroy actions in the Assessments tab. Talks (and their assessments) are managed via the Content tab.

Student Performance & Certification

Context

Lecture performance certification is a three-layer system:

  1. Records (factual data): Points and achievements computed from assessments
  2. Evaluator (proposals): Teacher-facing tool to generate pass/fail proposals
  3. Certification (decisions): Teacher-approved status (passed/failed/pending)

Students can only register for exams if they have a passed certification.

Student Performance (Teacher/Editor)

Three Distinct Views

Teachers interact with three separate interfaces:

  1. Performance Records (factual data): View computed points and achievements for all students
  2. Certification Dashboard (decision-making): Review proposals, bulk accept, manual override
  3. Rule Configuration (criteria setup): Define thresholds for automatic proposals

Screens

| View | Key elements | Mockup |
|---|---|---|
| Performance Records Index | Read-only factual data view. Table showing all lecture students (150) with columns: student name, matriculation, tutorial group, total points (computed), percentage, achievements completed (checkmarks), last computed timestamp. Filter by tutorial group. Search by name/matriculation. Recompute button (triggers background job). No override or status columns (this is pure data). Export list button. Pagination. | Mockup |
| Certification Dashboard | Decision-making interface. Summary cards (total students, passed count, failed count, pending count, stale count). Rule info alert (current thresholds: 50% points + 2 achievements). Filter buttons (All/Passed/Failed/Pending/Stale). Search by name. Table with columns: student name, matriculation, current points/achievements (from Records), proposed status (from Evaluator), certification status (from Certification table), override note (if manual), actions (Accept Proposal/Override). Bulk actions: "Accept All Proposals" button, "Mark Selected as Passed/Failed". Remediation alert if pending certifications block campaign. Export list button. Pagination. Includes manual override modal. | Mockup |
| Rule Configuration (inline on Lecture Settings) | Configuration card with tabs: "Percentage-based (Recommended)" and "Absolute Points". Percentage tab: min percentage input (e.g., 50%), required achievements checkboxes (with type badges). Absolute tab: min points input, achievement checkboxes. Preview button shows impact modal with summary stats. Save button. Alert: "Changing criteria will generate new proposals. Review in Certification Dashboard." | Mockup |
| Rule Change Preview Modal | Triggered when saving rule changes. Side-by-side rule comparison (current vs new). Summary cards: total students, would pass (+12), would fail (-12), status changes (12). Alert: "12 students would change status". Diff table showing affected students: columns (student name, matriculation, current points/achievements, current proposal, new proposal, change indicator with arrow icon). Actions: "Apply Changes" (updates proposals only, teacher must review), "Cancel". Warning: manual overrides preserved, proposals regenerated, teacher review required. | Mockup |
| Certification Remediation Modal (finalization) | Triggered when teacher tries to finalize campaign that has a finalization-phase student_performance policy with pending certifications among confirmed registrants. Allows inline resolution of pending certifications during finalization workflow. Warning alert: "6 students have pending certifications. Resolve to continue finalization." Table showing only confirmed registrants with pending status, columns: checkbox, student name, matriculation, points, percentage, achievements, proposed status (from Evaluator), quick-resolve dropdown (pre-filled based on proposal: passed/failed), note field (optional). Bulk resolve buttons: "Mark All as Passed", "Mark All as Failed". Info alert explaining consequences (passed → added to roster, failed → auto-rejected from exam). Confirmation checkbox required to enable submit. Actions: "Cancel Finalization" (blocks finalization, returns to campaign), "Apply & Retry Finalization" (saves certifications, re-runs finalization check with auto-rejection of failed students). | Mockup |

Flow

flowchart LR
  subgraph Setup
    RC[Rule Configuration] --> PREVIEW[Preview Impact]
    PREVIEW --> SAVE[Save Rule]
    SAVE --> DIFF[Rule Change Diff Modal]
    DIFF --> GEN[Generate New Proposals]
  end

  subgraph Certification
    CD[Certification Dashboard] --> BULK[Bulk Accept Proposals]
    CD --> OVERRIDE[Manual Override]
    BULK --> CERT[Update Certifications]
    OVERRIDE --> CERT
  end

  subgraph Exam_Campaign
    CAMP[Campaign Settings] --> OPEN{Open Campaign}
    OPEN -->|Missing Certs| ERROR[Hard Fail: Complete Certifications]
    ERROR --> CD
    OPEN -->|All Complete| REG[Registration Open]
  end

  subgraph Finalization
    FIN[Finalize Campaign] --> CHECK{Check Certs}
    CHECK -->|Pending| REM[Remediation Modal]
    REM --> RESOLVE[Resolve Pending]
    RESOLVE --> CHECK
    CHECK -->|All Passed/Failed| MAT[Materialize Rosters]
  end

  GEN --> CD
  CERT --> CAMP

Controller/action mapping

| Role | Controller | Actions | Scope/Notes |
|---|---|---|---|
| Teacher/Editor | StudentPerformance::RecordsController | index, show, recompute | Read-only factual data; trigger background recomputation |
| Teacher/Editor | StudentPerformance::CertificationsController | index, create, update, bulk_accept | Certification dashboard; bulk accept proposals; manual override with notes; remediation workflow for pending |
| Teacher/Editor | StudentPerformance::EvaluatorController | bulk_proposals, preview_rule_change, single_proposal | Generate proposals (does NOT create Certifications); preview rule change impact |
| Teacher/Editor | StudentPerformance::RulesController | edit, update | Configure thresholds; preview shows diff before save |

Evaluator Does Not Create Certifications

The Evaluator only generates proposals. Teachers must explicitly accept or override via the Certification Dashboard. This ensures teacher accountability for certification decisions.

Student Performance (Student)

Read-Only Performance View

Students can view their own performance data and certification status. They cannot edit or challenge certification decisions (this must go through normal administrative channels).

Screens

| View | Key elements | Mockup |
| --- | --- | --- |
| Performance Overview | Single-student view. Summary card: total points (192/240), percentage (80%), achievements completed (3/3). Certification status badge: "Passed ✓" (green) or "Failed ✗" (red) or "Pending ⏳" (yellow). Certification date and note from teacher (if manual override). Assignment breakdown table: columns (assignment title, points earned, max points, percentage). Achievements breakdown table: columns (achievement title, status). Link to detailed assignment results. No edit actions. | Mockup |

Flow

flowchart LR
  subgraph Student
    DASH[Dashboard] --> PERF[Performance Overview]
    PERF --> DETAILS[Assignment Details]
    PERF --> CHECK{Certification Status?}
    CHECK -->|Passed| REG[Can Register for Exam]
    CHECK -->|Failed/Pending| BLOCK[Cannot Register]
  end

Controller/action mapping

| Role | Controller | Actions | Scope/Notes |
| --- | --- | --- | --- |
| Student | StudentPerformance::RecordsController | show | View own performance data (points, achievements) |
| Student | StudentPerformance::CertificationsController | show | View own certification status (passed/failed/pending) and teacher note |

Exams

Context

Exam registration and roster management. Certification (from Student Performance) gates exam registration. For exam grading workflows, see Exam Grading Workflow above.

Exams (Teacher/Editor)

Pre-flight Checks & Two-Modal Pattern

Campaign opening and finalization may require certification checks depending on policy phase:

Registration-phase policy (rare): All lecture students must have finalized certifications before the campaign can be opened. The Pre-Flight Error Modal blocks opening and redirects to the Certification Dashboard.

Finalization-phase policy (standard): Students can register freely. At finalization time, confirmed registrants with pending certifications trigger the Remediation Modal for inline quick-resolution. Failed certifications result in auto-rejection.

Most exams use finalization-phase policies (students register freely, certification is verified before roster materialization).
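As a minimal sketch of this gating logic (method and argument names are hypothetical, not taken from the codebase):

```ruby
# Hypothetical sketch: a registration-phase policy blocks opening while
# any lecture student's certification is still pending, while a
# finalization-phase policy defers the check to finalization time.
def can_open_campaign?(policy_phase, certification_statuses)
  return true unless policy_phase == :registration

  certification_statuses.none? { |status| status == :pending }
end

def can_finalize_campaign?(policy_phase, registrant_statuses)
  return true unless policy_phase == :finalization

  # Pending registrants trigger the Remediation Modal instead.
  registrant_statuses.none? { |status| status == :pending }
end
```

With a finalization-phase policy the campaign opens regardless of pending certifications; only finalization is gated.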

Screens

| View | Key elements | Mockup |
| --- | --- | --- |
| Exams Index | Compact table with exam name, date/time, location, registered count (clickable link to roster), certification summary (passed/pending/failed counts with clickable link to Certification Dashboard), CRUD action buttons (edit/delete), summary cards (total exams, registered students count, certification status breakdown) | Mockup |
| Exam Roster (Post-Registration) | Shows only registered students who will take the exam. Summary cards (registered count: 85/150, grading progress: 12/85, auto-rejected: 3). Warning alert for auto-rejected students with link to details. Filter by tutorial group or grading status. Table with columns for each exam task showing individual points and max points, total points, student name, matriculation, tutorial group, certification status badge with manual override icon if applicable, grade, actions (Enter/Edit). View grading scheme link (opens modal showing task structure and points-to-grade conversion table). Export participant list. Link to grading interface. Pagination. Info alert clarifying this shows exam participants only, not all lecture students. | Mockup |
| Campaign Pre-Flight Error Modal | Triggered when a teacher tries to open a campaign that has a registration-phase student_performance policy with incomplete certifications. Blocks campaign opening until all lecture students have finalized certifications. Red modal header. Danger alert: "24 students have incomplete student performance certifications" with explanation that all students need approved/rejected status before opening. Scrollable table (max-height 400px) showing affected students with columns (name, matriculation, performance points/percentage with icon indicating if meets threshold, status badge: Pending/Missing). Info note explaining threshold icons. Modal footer with Cancel button and "Go to Certification Dashboard" primary button linking to certification management where teacher can batch-resolve all pending certifications. | Mockup |

Flow

flowchart LR
  subgraph Exam_Setup
    IX[Exams Index] --> NEW[Create Exam]
    NEW --> CAMP[Campaign Settings]
    CAMP --> REG_POLICY_CHECK{Has Registration-Phase<br/>Policy?}
    REG_POLICY_CHECK -->|Yes| CERT_CHECK{All Certifications<br/>Complete?}
    CERT_CHECK -->|No| ERROR[Pre-Flight Error Modal]
    ERROR --> CERT_DASH[Certification Dashboard]
    CERT_DASH --> CAMP
    CERT_CHECK -->|Yes| OPEN[Open Campaign]
    REG_POLICY_CHECK -->|No| OPEN
  end

  subgraph Registration
    OPEN --> REG[Students Register]
    REG --> CLOSE[Close Campaign]
  end

  subgraph Finalization
    CLOSE --> FIN_POLICY_CHECK{Has Finalization-Phase<br/>Policy?}
    FIN_POLICY_CHECK -->|Yes| PENDING_CHECK{Pending Certs Among<br/>Registrants?}
    PENDING_CHECK -->|Yes| REM[Remediation Modal]
    REM --> RESOLVE[Quick-Resolve Pending]
    RESOLVE --> PENDING_CHECK
    PENDING_CHECK -->|No| AUTO_REJECT[Auto-Reject Failed Certs]
    FIN_POLICY_CHECK -->|No| MAT[Materialize Roster]
    AUTO_REJECT --> MAT
  end

  subgraph Post_Registration
    MAT --> ROSTER[Exam Roster]
    ROSTER --> GRADE[Enter Grades]
  end

  IX -->|Click Certification Summary| CERT_DASH
  IX -->|Click Registered Count| ROSTER

Controller/action mapping

| Role | Controller | Actions | Scope/Notes |
| --- | --- | --- | --- |
| Teacher/Editor | ExamsController | index, new, create, show, edit, update, destroy | Full CRUD on exams |
| Teacher/Editor | Registration::CampaignsController | open, finalize | Pre-flight checks for certification completeness; auto-reject failed certifications |
| Teacher/Editor | Roster::MaintenanceController | show, update, export | View roster (registered students only), assign rooms, export |
| Teacher/Editor | StudentPerformance::CertificationsController | index | View certification status for campaign pre-flight checks |

Exams (Tutor)

Tutors have read-only access if permitted by abilities.

Screens

| View | Key elements | Mockup |
| --- | --- | --- |
| Exams Index (Read-only) | Read-only view of exams. Info alert: "You have read-only access as a tutor". Summary cards (total exams, registered students count, certification status breakdown). Table with columns: exam name/type, date/time, location, registered count (clickable link to roster), certification summary (passed/pending/failed counts), View Details button (read-only). No CRUD actions available. | Mockup |
| Exam Roster (Read-only) | Read-only view of registered students. Info alert: "You have read-only access - cannot modify grades". Summary cards (registered count: 85/150, grading progress: 12/85, auto-rejected: 3). Filter by tutorial group or grading status. Table with columns for each exam task showing individual points and max points, total points, student name, matriculation, tutorial group, certification status badge with manual override icon if applicable, grade. Export list button available. No edit/enter grade actions. Pagination. Info alert clarifying shows exam participants only. | Mockup |

Controller/action mapping

| Role | Controller | Actions | Scope/Notes |
| --- | --- | --- | --- |
| Tutor | ExamsController | index, show | Read-only (if permitted by abilities) |
| Tutor | Roster::MaintenanceController | show | View if permitted; no edit actions |

Dashboards

Context

Dashboards serve as role-based landing pages providing quick access to actionable items, deadlines, and key information. For detailed architecture, see Student Dashboard and Teacher & Editor Dashboard.

Student Dashboard

The student dashboard is the primary landing page for logged-in students, replacing the simple lecture list with a unified view of tasks, deadlines, and course progress.

Screens

| View | Key elements | Mockup |
| --- | --- | --- |
| Student Dashboard | Four-widget layout: What's Next? (urgent items: open registrations with deadline badges, assignment deadlines with due dates, Register Now/View/Submit action buttons); My Courses (lecture cards with progress bars, certification status badges, quick links to announcements/submissions/forum); Recent Activity (new grades with scores, recent announcements, View Details buttons); My Tutoring Responsibilities (conditional widget for student tutors showing assigned tutorial groups with links to roster and grading). Responsive grid layout. Color-coded deadline badges (red: tomorrow, yellow: 2 days, blue: future). | Mockup |

Flow

flowchart LR
  subgraph Student_Dashboard
    DASH[Student Dashboard] --> REG[Register for Campaign]
    DASH --> SUBMIT[Submit Assignment]
    DASH --> RESULTS[View Grade]
    DASH --> ANNOUNCE[View Announcement]
    DASH --> COURSE[Go to Course]
    DASH --> TUTOR_ROSTER[View Tutorial Roster]
    DASH --> TUTOR_GRADE[Grade Submissions]
  end

Controller/action mapping

| Role | Controller | Actions | Scope/Notes |
| --- | --- | --- | --- |
| Student | DashboardsController (or StudentsController) | show | Student dashboard with widgets for registrations, assignments, grades, announcements; conditional tutoring widget |

Teacher & Editor Dashboard

The teacher/editor dashboard serves as the administrative mission control for staff managing one or more lectures.

Screens

Context-Aware Performance Button

The lecture admin card shows either "Points Overview" or "Certifications" depending on semester phase:

  • Start of semester / mid-semester: "Points Overview" links to Performance Records (read-only factual data for monitoring progress)
  • End of semester (active exam campaign with certification policy): "Certifications" links to Certification Dashboard (decision-making interface for issuing certifications)

The button displays a pending count badge when certifications require teacher review.

| View | Key elements | Mockup |
| --- | --- | --- |
| Teacher & Editor Dashboard (End of Semester) | Four-widget layout: My Lectures (1 lecture + 2 seminars; action buttons for campaigns, rosters, assessments, certifications, forum, comments, announcements); Active Campaigns (1 exam registration campaign); My Tutoring Responsibilities (conditional widget). Lecture shows finalized rosters, 1 active exam campaign, "Certifications" button with "6 pending" badge, moderate forum/comment activity. Seminars show closed campaigns, enrolled students, pending talk grades. | Mockup |
| Teacher & Editor Dashboard (Start of Semester) | Four-widget layout: My Lectures (1 lecture + 2 seminars; same action buttons); Active Campaigns (tutorial registration + 2 seminar participant selections, all open); My Tutoring Responsibilities (conditional widget). Lecture shows draft rosters, active tutorial campaign, "Points Overview" button (no badge). Seminars show active campaigns, draft rosters, no talks graded yet, minimal forum/comment activity. | Mockup |

Flow

flowchart LR
  subgraph Teacher_Dashboard
    DASH[Teacher Dashboard] --> CAMPAIGNS[Manage Campaigns]
    DASH --> ROSTERS[Manage Rosters]
    DASH --> GRADEBOOK[Gradebook]
    DASH --> CERT[Student Performance]
    DASH --> ANNOUNCE[Announcements]
    DASH --> GRADE[Start Grading]
    DASH --> ALLOC[Allocate & Finalize]
    DASH --> TUTOR_ROSTER[View Tutorial Roster]
    DASH --> TUTOR_GRADE[Grade Submissions]
  end

Controller/action mapping

| Role | Controller | Actions | Scope/Notes |
| --- | --- | --- | --- |
| Teacher/Editor | DashboardsController (or TeachersController) | show | Teacher dashboard with widgets for lectures, campaigns, grading queue; conditional tutoring widget |

Mockups — index and conventions

Info

Purpose: central place for mockup conventions and a curated index of the key HTML mockups used across Registration and Campaigns flows.

Conventions

  • Format: plain HTML with Bootstrap 5 and Bootstrap Icons.
  • JS: minimal inline JS is allowed; use localStorage to simulate state.
  • Styling: prefer Bootstrap utilities; avoid custom CSS unless needed.
  • Scope: illustrate UI/UX and states; not production templates.
  • Accessibility: semantic tags where reasonable; label interactive elements.

Tip

See also: View architecture → Mockups section for legend and notes: 12-views.md

Authoring checklist

  • File location: architecture/src/mockups/.
  • Naming: describe screen + variant, e.g. student_registration_fcfs.html.
  • Include: viewport meta, Bootstrap CSS/JS, Bootstrap Icons.
  • Use: consistent headings, button labels, and table markup.
  • Simulate: disabled/enabled states; empty/error states as needed.

Index — Registration and Campaigns

Campaigns (admin)

Student Registration (student)

Roster maintenance (teacher/editor)

Exams & Eligibility (teacher/editor)

Assessments (teacher/editor/tutor/student)

Student Performance (teacher/editor/student)

Dashboards

Student Account

Change policy

  • Keep mockups small and focused on one screen/variant.
  • When adding mockups, link them from the relevant feature docs and add them here.

Multiple Choice Exams

Extension Feature

This chapter documents an optional extension for exams that include multiple choice components requiring special legal compliance. This feature should be implemented after the core assessment and exam infrastructure is complete.

What are MC Exams?

MC (Multiple Choice) exams are exams that include a multiple choice section which must be graded according to legally mandated schemes, separate from the written part.

  • Common Example: "Final Exam with 30% MC part, 70% written part"
  • Legal Context: German examination law (Prüfungsordnung) requires specific grading rules for MC components
  • In this context: An extension to the base Exam model that adds MC-specific grading automation

Problem Overview

When an exam has both MC and written parts:

  • MC part must use a fixed legal grading scheme (defined by law)
  • Written part uses standard exam grading (curve-based or absolute)
  • Final grade is weighted mean of both parts
  • Students must meet a minimum MC threshold to pass
  • Threshold can be lowered by a "sliding clause" (Gleitklausel) based on average performance
  • Without automation, staff must manually compute this for hundreds of students

Solution Architecture

We extend the base Exam model with:

  • MC Configuration: has_multiple_choice flag and mc_weight percentage
  • Task-Level Scheme: Assessment::Task gets is_multiple_choice flag and grade_scheme_id
  • Legal Grader Service: Assessment::McGrader implements the two-stage grading process
  • Automatic Adjustment: Service computes threshold, checks eligibility, adjusts grade bands, computes final grades

Exam Model Extension

Additional Fields

| Field | Type | Description |
| --- | --- | --- |
| has_multiple_choice | Boolean | Whether this exam includes an MC part |
| mc_weight | Decimal | Weight of MC part in final grade (e.g., 0.3 = 30%) |

Example Configuration

exam = Exam.create!(
  lecture: lecture,
  title: "Final Exam",
  date: Date.new(2025, 2, 15),
  has_multiple_choice: true,
  mc_weight: 0.3  # MC part counts for 30% of final grade
)

Assessment::Task Extensions

MC-Specific Fields

| Field | Type | Description |
| --- | --- | --- |
| is_multiple_choice | Boolean | Marks this task as the MC part |
| grade_scheme_id | FK (optional) | Links to a grade scheme specifically for this task |

Validations

module Assessment
  class Task < ApplicationRecord
    validates :is_multiple_choice, inclusion: { in: [true, false] }
    validate :mc_flag_only_for_exams
    validate :at_most_one_mc_task_per_assessment
    validate :grade_scheme_only_for_mc_tasks

    scope :multiple_choice, -> { where(is_multiple_choice: true) }
    scope :regular, -> { where(is_multiple_choice: false) }

    private

    def mc_flag_only_for_exams
      return unless is_multiple_choice?
      return if assessment.assessable.is_a?(Exam)

      errors.add(:is_multiple_choice,
                 "can only be set for exam assessments")
    end

    def at_most_one_mc_task_per_assessment
      return unless is_multiple_choice?
      return unless assessment

      other_mc_tasks = assessment.tasks.multiple_choice
                                 .where.not(id: id)

      if other_mc_tasks.exists?
        errors.add(:is_multiple_choice,
                   "only one MC task allowed per assessment")
      end
    end

    def grade_scheme_only_for_mc_tasks
      return unless grade_scheme_id.present?
      return if is_multiple_choice?

      errors.add(:grade_scheme,
                 "can only be assigned to multiple choice tasks")
    end
  end
end

Why Task-Level Grade Scheme?

MC tasks need their own grade scheme because:

  • The MC part may have different max points than the written part
  • Staff may want a relative (curve-based) scheme for MC but absolute for written, or vice versa
  • Each exam can configure this independently

The validation ensures only MC tasks can have their own scheme—regular tasks use the assessment-level scheme.


German Law (Prüfungsordnung)

The law requires a minimum passing threshold for the MC part:

  • Default threshold: 60% of MC points
  • Sliding clause (Gleitklausel): If (mean - 20%) < 60%, threshold becomes (mean - 20%)
  • Floor: Threshold cannot go below 50% of max MC points
  • Students who fail to meet the MC threshold fail the entire exam (grade 5.0)

Threshold Computation Examples

Example 1: Average MC score is 70%

  • Sliding threshold: 70% - 20% = 50%
  • Since 50% < 60%, threshold becomes 50%
  • Students need at least 50% on MC to pass

Example 2: Average MC score is 85%

  • Sliding threshold: 85% - 20% = 65%
  • Since 65% > 60%, threshold stays at 60%
  • Students need at least 60% on MC to pass

Example 3: Average MC score is 65%

  • Sliding threshold: 65% - 20% = 45%
  • Since 45% < 50% (floor), threshold becomes 50%
  • Students need at least 50% on MC to pass
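All three examples follow from one clamped formula; a minimal Ruby sketch (method name assumed, not from the codebase):

```ruby
# Sliding-clause threshold: start from the 60% default, slide down with
# the cohort mean (mean minus 20 percentage points), but never below the
# 50% floor.
def mc_threshold(mean_percentage)
  [[mean_percentage - 0.20, 0.60].min, 0.50].max
end
```

mc_threshold(0.70), mc_threshold(0.85), and mc_threshold(0.65) reproduce the 50%, 60%, and 50% thresholds from the examples above.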

Grade Scheme Adjustment

Dynamic Grade Bands

When the Gleitklausel lowers the threshold below 60%, the MC task's grade scheme must be dynamically adjusted:

Without Gleitklausel (threshold = 60%):

  • Original scheme: 60% → 4.0, 70% → 3.0, 80% → 2.0, 90% → 1.0

With Gleitklausel (threshold = 50%):

  • Adjusted scheme: 50% → 4.0, 60% → 3.0, 70% → 2.0, 80% → 1.0
  • All grade boundaries shift by the same amount (10 percentage points in this example)

The McGrader service handles this adjustment automatically.
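The shift itself can be sketched in a few lines (the band shape is assumed to match the `{ min_percentage:, grade: }` hashes used elsewhere in this chapter):

```ruby
# Shift every grade boundary by the delta between the adjusted and the
# default threshold (here -0.10), clamping at 0. Percentages are rounded
# to two decimals to avoid floating-point drift in the boundaries.
def shift_bands(bands, threshold, default_threshold = 0.60)
  return bands if threshold == default_threshold

  shift = threshold - default_threshold
  bands.map do |band|
    { min_percentage: [(band[:min_percentage] + shift).round(2), 0.0].max,
      grade: band[:grade] }
  end
end
```

With threshold 0.50, the band 60% → 4.0 becomes 50% → 4.0, matching the example above.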


Assessment::McGrader (Service)

What it represents

A service that implements the legally mandated two-stage grading process for exams with MC components.

Public Interface

| Method | Description |
| --- | --- |
| apply_legal_scheme! | Computes threshold, checks eligibility, computes grades for all students |

Two-Stage Process

Stage 1: MC Threshold Check

  1. Compute average MC score across all students
  2. Apply sliding clause: threshold = max(min(mean - 20%, 60%), 50%)
  3. Check each student's MC percentage against threshold
  4. Students below threshold fail immediately (grade 5.0)

Stage 2: Grade Computation (for passing students)

  1. Adjust MC grade scheme if threshold < 60%
  2. Compute MC grade using adjusted scheme
  3. Compute written grade using standard scheme
  4. Compute final grade: final = mc_weight × mc_grade + (1 - mc_weight) × written_grade
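Worked example for step 4, with mc_weight = 0.3:

```ruby
mc_weight = 0.3
mc_grade = 1.3       # grade on the MC part (adjusted scheme)
written_grade = 2.0  # grade on the written part (standard scheme)

final_grade = (mc_weight * mc_grade + (1 - mc_weight) * written_grade).round(2)
# => 1.79 -- not itself a valid German grade step; see the
# "Rounding Strategy" open question at the end of this chapter
```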

Implementation

module Assessment
  class McGrader
    def initialize(assessment)
      @assessment = assessment
      @exam = assessment.assessable
    end

    def apply_legal_scheme!
      return unless @exam.is_a?(Exam) && @exam.has_multiple_choice?

      mc_task = @assessment.tasks.multiple_choice.first
      regular_tasks = @assessment.tasks.regular

      mc_threshold = compute_mc_threshold(mc_task)

      @assessment.participations.find_each do |participation|
        mc_points = participation.task_points
                                 .find_by(task: mc_task)&.points || 0
        mc_percentage = mc_points.to_f / mc_task.max_points

        if mc_percentage < mc_threshold
          participation.update!(
            grade_value: 5.0,
            passed: false,
            failure_reason: "MC threshold not met (#{(mc_threshold * 100).round}%)"
          )
          next
        end

        mc_grade = compute_mc_grade(mc_points, mc_task, mc_threshold)
        regular_grade = compute_regular_grade(participation, regular_tasks)

        final_grade = (@exam.mc_weight * mc_grade) +
                      ((1 - @exam.mc_weight) * regular_grade)

        participation.update!(grade_value: final_grade, passed: final_grade <= 4.0)
      end
    end

    private

    def compute_mc_threshold(mc_task)
      all_mc_points = @assessment.participations
                                  .joins(:task_points)
                                  .where(task_points: { task: mc_task })
                                  .pluck("task_points.points")

      return 0.60 if all_mc_points.empty?

      mean_points = all_mc_points.sum.to_f / all_mc_points.size
      mean_percentage = mean_points / mc_task.max_points

      default_threshold = 0.60
      sliding_threshold = mean_percentage - 0.20
      floor_threshold = 0.50

      if sliding_threshold < default_threshold
        [sliding_threshold, floor_threshold].max
      else
        default_threshold
      end
    end

    def compute_mc_grade(mc_points, mc_task, threshold_percentage)
      raise "MC task must have a grade scheme" unless mc_task.grade_scheme

      adjusted_scheme = adjust_scheme_for_threshold(
        mc_task.grade_scheme,
        threshold_percentage
      )

      GradeScheme::Applier.compute_grade(
        mc_points,
        mc_task.max_points,
        adjusted_scheme
      )
    end

    def adjust_scheme_for_threshold(original_scheme, threshold)
      default_threshold = 0.60

      return original_scheme if threshold == default_threshold

      shift = threshold - default_threshold

      original_scheme.bands.map do |band|
        {
          min_percentage: [band[:min_percentage] + shift, 0.0].max,
          grade: band[:grade]
        }
      end
    end

    def compute_regular_grade(participation, regular_tasks)
      total = participation.task_points
                          .where(task: regular_tasks)
                          .sum(:points)
      max = regular_tasks.sum(:max_points)

      GradeScheme::Applier.compute_grade(
        total,
        max,
        @assessment.grade_scheme
      )
    end
  end
end

Grading Workflow for MC Exams

| Step | Action | Technical Details |
| --- | --- | --- |
| 1 | Create assessment | Staff creates Assessment::Assessment for the exam |
| 2 | Create tasks | Staff creates tasks, marking one with is_multiple_choice: true |
| 3 | Configure MC scheme | Staff creates and assigns a GradeScheme::Scheme to the MC task |
| 4 | Grade all tasks | Tutors grade MC questions and written problems normally |
| 5 | Apply MC grader | Staff calls Assessment::McGrader.new(assessment).apply_legal_scheme! |
| 6 | Results | Service computes threshold, checks eligibility, adjusts scheme, computes final grades |

MC Task Grade Scheme

The MC task has its own grade scheme (can be relative/curve-based or absolute). Only MC tasks can have task-level schemes—this is enforced by validation. Regular tasks use the assessment-level scheme.


Database Schema Extensions

Exams Table Migration

# filepath: db/migrate/20250101000000_add_multiple_choice_to_exams.rb
class AddMultipleChoiceToExams < ActiveRecord::Migration[7.0]
  def change
    add_column :exams, :has_multiple_choice, :boolean, default: false, null: false
    add_column :exams, :mc_weight, :decimal, precision: 5, scale: 2

    reversible do |dir|
      dir.up do
        execute <<-SQL
          ALTER TABLE exams
          ADD CONSTRAINT exams_mc_weight_when_mc
          CHECK (NOT has_multiple_choice OR mc_weight IS NOT NULL);
        SQL
      end

      dir.down do
        execute "ALTER TABLE exams DROP CONSTRAINT IF EXISTS exams_mc_weight_when_mc;"
      end
    end
  end
end

Assessment Tasks Table Migration

# filepath: db/migrate/20250102000000_add_multiple_choice_to_tasks.rb
class AddMultipleChoiceToTasks < ActiveRecord::Migration[7.0]
  def change
    add_column :assessment_tasks, :is_multiple_choice, :boolean,
               default: false, null: false
    add_reference :assessment_tasks, :grade_scheme, foreign_key: true,
                  index: true
  end
end

Assessment Participations Table Migration

# filepath: db/migrate/20250103000000_add_failure_reason_to_participations.rb
class AddFailureReasonToParticipations < ActiveRecord::Migration[7.0]
  def change
    add_column :assessment_participations, :failure_reason, :text
  end
end

Application-Level Constraints

The "at most one MC task per assessment" constraint is enforced via ActiveRecord validation, not database constraint, to provide better error messages.


Usage Scenarios

Creating an MC Exam

exam = Exam.create!(
  lecture: lecture,
  title: "Final Exam",
  date: Date.new(2025, 2, 15),
  has_multiple_choice: true,
  mc_weight: 0.3
)

assessment = exam.assessment
assessment.tasks.create!(title: "MC Questions", max_points: 30, is_multiple_choice: true)
assessment.tasks.create!(title: "Problem 1", max_points: 20)
assessment.tasks.create!(title: "Problem 2", max_points: 25)
assessment.tasks.create!(title: "Problem 3", max_points: 25)

mc_task = assessment.tasks.multiple_choice.first
mc_scheme = GradeScheme::Scheme.create!(
  title: "MC Legal Scheme",
  kind: :absolute,
  bands: [
    { min_percentage: 0.90, grade: 1.0 },
    { min_percentage: 0.80, grade: 2.0 },
    { min_percentage: 0.70, grade: 3.0 },
    { min_percentage: 0.60, grade: 4.0 }
  ]
)
mc_task.update!(grade_scheme: mc_scheme)

Grading and Computing Final Grades

Assessment::McGrader.new(assessment).apply_legal_scheme!

This single call:

  • Computes the MC threshold with sliding clause
  • Fails students below the threshold
  • Adjusts the MC grade scheme if needed
  • Computes MC and written grades
  • Computes weighted final grades

Integrity Constraints

-- At most one MC task per assessment (enforced in application layer)
-- See Assessment::Task#at_most_one_mc_task_per_assessment validation

-- Grade scheme only for MC tasks
-- See Assessment::Task#grade_scheme_only_for_mc_tasks validation

-- MC flag only for exam assessments
-- See Assessment::Task#mc_flag_only_for_exams validation

Open Questions

Rounding Strategy

The weighted mean of two grades (e.g., 0.3 × 1.3 + 0.7 × 2.0 = 1.79) is not necessarily a well-defined grade in the German system. We need to decide on a rounding strategy:

  • Round to nearest 0.3 step (1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0, ...)?
  • Round down/up consistently?
  • Allow arbitrary decimal grades?

This detail should be clarified before implementation.
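One candidate resolution, as a sketch only (constant and method names are hypothetical): snap the weighted mean to the nearest valid grade step.

```ruby
# Valid German grade steps; 5.0 is the only failing grade.
GRADE_STEPS = [1.0, 1.3, 1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 4.0, 5.0].freeze

# Snap a weighted mean to the nearest step; on an exact tie, min_by keeps
# the earlier (better) grade.
def snap_to_grade(value)
  GRADE_STEPS.min_by { |grade| (grade - value).abs }
end
```

snap_to_grade(1.79) yields 1.7; whether rounding toward the better grade is legally acceptable is exactly the open question.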


Proposed File Structure

app/
├── models/
│   ├── assessment/
│   │   └── task.rb (extended with MC fields and validations)
│   └── exam.rb (extended with has_multiple_choice and mc_weight)
└── services/
    └── assessment/
        └── mc_grader.rb (new service)

State Diagram

stateDiagram-v2
    [*] --> TasksCreated
    TasksCreated --> AllGraded : Tutors grade all tasks
    AllGraded --> ThresholdComputed : McGrader computes threshold
    ThresholdComputed --> StudentsChecked : Check each student MC percentage
    StudentsChecked --> GradesComputed : For passing students compute final grades
    GradesComputed --> [*]

    note right of ThresholdComputed
        Apply sliding clause
        threshold = max(min(mean - 20pct, 60pct), 50pct)
    end note

    note right of StudentsChecked
        Students below MC threshold
        fail immediately (grade 5.0)
    end note

    note right of GradesComputed
        Adjust MC scheme if needed
        compute weighted mean
    end note

Future Extensions & Roadmap

Collection of potential enhancements and ideas for future development.

Implementation Status

The core architecture documented in Chapters 1-9 represents the planned baseline. This chapter lists potential future enhancements.


1. Allocation Algorithm

  • CP-SAT strategy (fairness tiers, exclusions, group pairing)
  • Soft penalties (time-of-day preferences, instructor load balancing)
  • Diversity/quota constraints (track distribution, campus location)
  • Multi-round capacity release (phased seat allocation)
  • Waitlist modeling (flow network with priority costs)
  • Multi-campaign global optimization (joint tutorial + lab balancing)
  • Solver audit trail (persist inputs/outputs as JSON for debugging)
  • Alternative algorithm comparison (min-cost flow vs. CP-SAT benchmarks)

2. Registration & Policy System

Scheduled Campaign Opening

Current State: Campaigns require manual teacher action to transition draft → open.

Proposed Enhancement: Automatic opening via background job.

Implementation:

add_column :registration_campaigns, :registration_start, :datetime

# Validation
validates :registration_start, presence: true
validate :start_before_deadline

# Background job, scheduled every 5 minutes (Sidekiq-style worker)
module Registration
  class CampaignOpenerJob
    include Sidekiq::Job

    def perform
      Registration::Campaign.where(status: :draft)
        .where("registration_start <= ?", Time.current)
        .find_each(&:open!)
    end
  end
end

Benefits:

  • Symmetry: auto-open + auto-close provides full automation
  • Teacher workflow: set up campaign in advance, forget about it
  • Reduces manual intervention during high-traffic registration windows

Trade-offs:

  • Adds complexity (another background job, another timestamp)
  • Teachers lose last-minute verification opportunity before going live
  • Current manual flow forces review before opening

Recommendation: Defer to post-MVP. Current workaround (manual open) is acceptable. Implement if teachers report frequent "forgot to open" incidents during beta testing.

Complexity: Low (additive change, no schema conflicts)

References: See Registration - Campaign Lifecycle


Full Trace for Policy Evaluation

Context: The current policy engine stops at the first failure (eligible? returns false immediately).

Proposed Enhancement: Implement a full_trace method that evaluates all policies and returns all failures.

Reference: Original Implementation Draft

Reasoning: For a student, it might be beneficial to see all reasons for ineligibility at once. Currently, if they fix one violation (e.g., "Policy X violated"), they might immediately encounter the next one ("Policy Y violated"). A full trace allows them to resolve all issues in parallel.
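A minimal sketch of the idea (the policy interface shown here — eligible? and failure_message — is assumed, not the actual API):

```ruby
# Evaluate every policy instead of short-circuiting at the first failure,
# returning all failure messages so the student can fix violations in
# parallel.
def full_trace(user, policies)
  policies.reject { |policy| policy.eligible?(user) }
          .map(&:failure_message)
end
```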


Other Registration Extensions


  • Roster membership policy (on_roster): Restrict registration to users on a specific lecture/course roster. Alternative to campaign chaining via prerequisite_campaign policy (e.g., seminar talk registration restricted to students on seminar enrollment roster). Config: { "roster_source_id": <lecture_id> }. Check: source.roster.include?(user).
  • Item-level capacity: Add capacity column to registration_items to enable capacity partitioning across campaigns (e.g., same tutorial in two campaigns with split capacity: 20 seats for CS students, 10 for Physics). Items have independent capacity from domain objects. Soft validation warns if sum(items.capacity) > tutorial.capacity.
  • Policy trace persistence (store evaluation results for audit)
  • User-facing explanations (API endpoint showing why ineligible)
  • Rate limiting for FCFS hotspots
  • Bulk eligibility preview (matrix: users × policies)
  • Policy simulation mode (test changes without affecting real data)
  • Automated certification proposals (ML-based predictions from partial semester data)
  • Certification templates (pre-fill common override scenarios)
  • Certification bulk operations (approve/reject multiple students at once)
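The roster membership check described in the first bullet could be sketched as follows; the config key follows the bullet above, while the in-memory hash stands in for the real `source.roster` association:

```ruby
# Hypothetical sketch of the on_roster policy check; not existing code.
def on_roster_satisfied?(config, rosters_by_lecture_id, user_id)
  roster = rosters_by_lecture_id.fetch(config["roster_source_id"], [])
  roster.include?(user_id)
end

rosters = { 42 => [1, 2, 3] }  # lecture 42's enrollment roster
p on_roster_satisfied?({ "roster_source_id" => 42 }, rosters, 2)  # => true
p on_roster_satisfied?({ "roster_source_id" => 42 }, rosters, 9)  # => false
```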

3. Roster Management

  • Batch operations (CSV import/export)
  • Capacity forecasting and rebalancing suggestions
  • Automatic load balancing (heuristic-based)
  • Enhanced change history UI

4. Assessment & Grading

Submission Support for Exams and Talks

Current Status

Currently, file submissions are only implemented for Assignment types. The underlying data model (Submission with assessment_id field) was designed to support submissions for all assessment types, but the UI and workflows are scoped to assignments only.

Use Cases for Future Extension:

| Assessment Type | Submission Scenario | Example |
|---|---|---|
| Exam (Online) | Students upload completed exam PDFs | Take-home exam, timed online exam |
| Exam (In-Person) | Staff upload scanned answer sheets | Physical exam digitized for archival/grading |
| Talk | Speakers upload presentation materials | Slides, handouts, supplementary files |

Infrastructure Ready:

  • Submission model uses assessment_id (supports any assessment type)
  • Assessment::Assessment has requires_submission boolean field
  • Assessment::Participation tracks submitted_at timestamp
  • Assessment::TaskPoint can link to submission_id for audit trails

Requirements for Implementation:

  • Design submission UI adapted for exam/talk contexts (different from assignment task-based interface)
  • Adapt grading workflows (exam submissions may need different grading patterns than assignment tasks)
  • Consider timing constraints (exam time windows, talk presentation schedules)
  • Define file type restrictions (exam PDFs vs presentation formats)
  • Handle team vs individual submissions (talks may have co-presenters)
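Two of these requirements, timing windows and file-type restrictions, lend themselves to a small shared validation step. The sketch below is illustrative only; the parameter names are assumptions, not existing code:

```ruby
require "time"

# Hypothetical pre-submission check combining a submission window with an
# allow-list of file extensions.
def submission_allowed?(now:, opens_at:, closes_at:, filename:, allowed_exts:)
  in_window = now >= opens_at && now <= closes_at
  in_window && allowed_exts.include?(File.extname(filename).downcase)
end

opens  = Time.parse("2025-01-10 09:00")
closes = Time.parse("2025-01-10 12:00")
p submission_allowed?(now: Time.parse("2025-01-10 10:00"),
                      opens_at: opens, closes_at: closes,
                      filename: "exam.pdf", allowed_exts: [".pdf"])
# => true
```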

Complexity: Medium (model foundation exists, need UI and workflow design)

References: See Assessments & Grading - Submission Model


Task-Wise Grading (Optional Workflow)

Current Status

The default grading workflow is tutorial-wise: each tutor grades all tasks for their own tutorial's submissions. The data model already supports an alternative workflow where grading is distributed by task instead of by tutorial, but this requires additional UI and configuration features.

Use Case:

Instead of each tutor grading every task for their own tutorial, task-wise grading assigns each tutor one specific task to grade across all tutorials.

| Traditional (Tutorial-Wise) | Task-Wise Alternative |
|---|---|
| Tutorial A tutor: grades Tasks 1-3 for 30 students | Tutor 1: grades Task 1 for all 60 students |
| Tutorial B tutor: grades Tasks 1-3 for 30 students | Tutor 2: grades Task 2 for all 60 students |
| Each tutor: 90 gradings (30 × 3) | Tutor 3: grades Task 3 for all 60 students |
| | Each tutor: 60 gradings (specialization) |

Benefits:

  • Consistency: Same tutor grades same problem for everyone (reduces grading variance)
  • Efficiency: Tutor becomes expert in one problem, grades faster with practice
  • Fairness: Eliminates "tough tutor vs. lenient tutor" differences per task
  • Specialization: Complex problems assigned to most experienced tutor

Infrastructure Already in Place:

  • Assessment::TaskPoint has grader_id (can be any tutor)
  • Submission has tutorial_id for context but grading isn't restricted by it
  • Assessment::SubmissionGrader accepts any grader: parameter

Requirements for Implementation:

  1. Data Model Addition:

    • New model: Assessment::TaskAssignment linking task_id → tutor_id
    • New enum on Assessment::Assessment: grading_mode (:tutorial_wise default, :task_wise)
    • Migration for assessment_task_assignments table
  2. Teacher Interface:

    • Assessment show page: grading mode selector
    • When task-wise selected: UI to assign each task to a tutor
    • Progress dashboard showing per-task completion across all tutorials
  3. Modified Tutor Grading Interface:

    • Filter submissions by assigned tasks (not just by tutorial)
    • Show all tutorials' submissions for assigned tasks
    • Progress: "45/89 students graded for Task 1"
    • Maintain existing grading UI, just change data scope
  4. Controller Logic:

    if @assessment.task_wise?
      @tasks = @assessment.tasks
        .joins(:task_assignments)
        .where(assessment_task_assignments: { tutor_id: current_user.id })
      @submissions = @assessment.submissions  # all tutorials
    else
      @tasks = @assessment.tasks  # all tasks
      @submissions = @tutorial.submissions  # current tutorial only
    end
    
  5. Publication Control:

    • Recommend teacher-level publication when all tasks complete
    • Per-tutorial publication doesn't make sense in task-wise mode
    • Could offer per-task publication as alternative
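The mode-dependent scoping from requirement 4 can be sketched in plain Ruby, with a Struct standing in for the proposed Assessment::TaskAssignment records (names follow the requirements above; this is not existing code):

```ruby
# Plain-Ruby sketch of mode-dependent grading scope.
TaskAssignment = Struct.new(:task_id, :tutor_id)

# Returns the task ids a tutor should grade under the given mode.
def tasks_for_grader(mode:, all_task_ids:, assignments:, tutor_id:)
  case mode
  when :tutorial_wise
    all_task_ids  # every task, but only for the tutor's own tutorial
  when :task_wise
    assignments.select { |a| a.tutor_id == tutor_id }.map(&:task_id)
  else
    raise ArgumentError, "unknown grading_mode: #{mode}"
  end
end

assignments = [
  TaskAssignment.new(1, 10), # Task 1 -> Tutor 10
  TaskAssignment.new(2, 11), # Task 2 -> Tutor 11
  TaskAssignment.new(3, 10)  # Task 3 -> Tutor 10
]
p tasks_for_grader(mode: :task_wise, all_task_ids: [1, 2, 3],
                   assignments: assignments, tutor_id: 10)
# => [1, 3]
```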

Edge Cases to Handle:

  • Reassignment mid-grading: keep existing grader_id on TaskPoints (historical record)
  • Cross-tutorial teams: team submission appears once, graded by task-assigned tutor
  • Mixed mode: initially all-or-nothing (can't mix modes per task)

Complexity: Medium (model support exists, need UI and workflow adaptation)

References: See Assessments & Grading - TaskPoint Model for grader_id field


Other Assessment Extensions

  • Inline annotation integration (external service)
  • Rubric templates per task (structured criteria + auto-sum)
  • Late policy engine (configurable penalty computation)
  • Task dependencies (unlock logic)
  • Peer review workflows

Grading Audit Trail (Teacher Override Tracking)

Use Case: Track when teachers modify points after initial grading (e.g., complaint handling).

Current State:

  • Assessment::TaskPoint has grader_id and graded_at
  • No explicit tracking of modifications after initial grading
  • Cannot distinguish "teacher graded initially" from "teacher overrode tutor grade"

Implementation:

Add modification tracking fields:

add_column :assessment_task_points, :modified_by_id, :integer
add_column :assessment_task_points, :modified_at, :datetime
add_index :assessment_task_points, :modified_by_id

Logic:

  • Initially: grader_id = tutor, modified_by_id = nil
  • Teacher edits: modified_by_id = teacher, modified_at = Time.current
  • Keep original grader_id for audit trail
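The override rule above can be sketched in plain Ruby (a Struct stands in for Assessment::TaskPoint; the helper name is an assumption):

```ruby
# Field names mirror the proposed columns; ActiveRecord is omitted.
TaskPoint = Struct.new(:points, :grader_id, :graded_at, :modified_by_id, :modified_at)

def override_points!(task_point, new_points, editor_id, at:)
  task_point.points = new_points
  task_point.modified_by_id = editor_id  # who overrode the grade
  task_point.modified_at = at            # when the override happened
  task_point                             # grader_id / graded_at stay untouched
end

tp = TaskPoint.new(5, 101, "2025-01-05", nil, nil)  # tutor 101 graded 5 points
override_points!(tp, 8, 7, at: "2025-01-20")        # teacher 7 raises to 8
p [tp.points, tp.grader_id, tp.modified_by_id]
# => [8, 101, 7]
```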

Benefits:

  • Explicit tracking of override events
  • Preserves original grader context
  • Enables audit reports ("all teacher overrides for this assessment")
  • Simple to query and display in UI

UI Indicators:

  • Warning icon on modified cells
  • Tooltip: "Modified by [Teacher Name] on [Date]"
  • Teacher grading details view shows "Last Changed" column

Multiple Choice Extensions

  • MC question bank (reusable question library)
  • Randomized exams (per-student variants)
  • Statistical analysis (item difficulty, discrimination indices)

5. Student Performance & Certification

Recently Implemented

The core certification workflow (teacher-approved eligibility decisions, Evaluator proposals, pre-flight checks) is now part of the baseline architecture documented in Chapter 5.

Future Extensions:

  • Multiple concurrent certification policies (AND/OR logic expression builder)
  • Incremental recompute (listen to grade changes, auto-update stale certifications)
  • Student-facing certification preview (before registration opens, show provisional status)
  • Custom formula DSL (complex eligibility calculations beyond simple point thresholds)
  • Certification history (track changes over time, audit teacher decisions)
  • Automated ML proposals (predict eligibility from partial semester data)
  • Bulk certification UI (approve/reject multiple students with filters)
  • Certification analytics (pass rate trends, override frequency analysis)

6. Grade Schemes

  • Percentile buckets (automatic equal-size grouping)
  • Curve normalization (mean target, standard deviation scaling)
  • Piecewise linear editor with live histogram preview
  • Custom function DSL (arbitrary grade computations)
  • Course-level aggregation (weighted composition across assessments)
  • Pass/fail rules (configurable requirements)
  • Bonus points system (extra credit with caps)
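As one illustration, percentile buckets could be computed roughly as below; this sketch assumes equal-size groups and ignores ties and uneven remainders:

```ruby
# Rank scores, split into equal-size groups, assign the matching grade.
def percentile_grades(scores, grades)
  ranked = scores.sort.reverse
  bucket_size = (ranked.size.to_f / grades.size).ceil
  scores.map { |s| grades[ranked.index(s) / bucket_size] }
end

p percentile_grades([90, 50, 70, 60], %w[A B C D])
# => ["A", "D", "B", "C"]
```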

7. Analytics & Reporting

  • Student grade projection ("what if" calculator)
  • Progress tracking dashboard
  • Historical trend comparison
  • Allocation satisfaction metrics (average preference rank achieved)
  • Grade distribution analysis (variance heatmaps, outliers)
  • Capacity utilization tracking
  • Tutor workload reports
  • CSV export with snapshot versioning
  • JSON API (read-only endpoints)
  • Materialized views for performance

8. Operational Tools

  • Automatic integrity auditor (scheduled job checking invariants)
  • Integrity dashboard (real-time constraint violations)
  • Performance metrics (query times, job durations, failure rates)
  • mdBook link checker CI integration
  • Chaos testing (inject perturbations in test environment)
  • Solver visualizer (export flow network to DOT/Mermaid)
  • Benchmark harness (compare algorithm performance)

9. Performance & Scalability

  • Incremental solver updates (delta changes for preference edits)
  • Eligibility caching (memoize with versioned keys)
  • Points total caching (invalidate per TaskPoint write)
  • Database sharding strategy

10. API & Extensibility

  • GraphQL endpoint (read-only access to allocations/grades)
  • REST API (standard CRUD for integrations)
  • Webhooks (events: finalize, grade published, eligibility change)
  • Internal event bus (decouple reactions)
  • Plugin system (custom policy types, grade schemes)

11. Security & Compliance

  • Policy audit trail (tamper-evident logs)
  • PII minimization (anonymize exports, configurable retention)
  • GDPR compliance (data export, deletion, consent management)

12. Developer Experience

  • Reference seed script (generate realistic test data)
  • Scenario generator (complex allocation/grading scenarios)
  • Documentation sync (CI check for broken mdbook links)

13. UI/UX

  • Real-time capacity counters (WebSocket updates)
  • Drag-drop preference ordering with validation
  • Grade histogram overlay (scheme preview)

14. Migration & Cleanup

  • Dual-write (new + legacy systems)
  • Backfill historical data
  • Read switch with parity monitoring
  • Remove deprecated code/columns
  • Legacy eligibility flags cleanup
  • Manual roster seeding code removal
  • Obsolete submission routing cleanup

15. Research Opportunities

  • Fairness metrics (study allocation algorithm properties)
  • Optimal grading curves (per-subject analysis)
  • Predictive modeling (early intervention for at-risk students)
  • Learning analytics (engagement vs. outcomes correlation)

16. Full Trace for Policy Evaluation

(Moved to Section 2: Registration & Policy System)

Teacher & Editor Dashboard

Problem Overview

Teachers and lecture editors currently lack a centralized administrative hub. Managing course registrations, viewing allocation progress, overseeing grading, and administering rosters requires navigating to disparate sections of the application for each lecture. This becomes inefficient, especially for staff managing multiple courses.

Solution Architecture

We will introduce a new, role-based dashboard for users with teacher or editor permissions on one or more lectures. This dashboard will serve as the primary administrative entry point, providing a high-level overview of all assigned lectures and direct links to key management tasks like registration setup, roster administration, and grading.


1) Dashboard Controller & View

What it represents

  • A new Rails controller and view, likely under an administrative namespace (e.g., admin/dashboard_controller.rb), that is the landing page for users with teaching roles.

Think of it as

  • The mission control center for lecture administration.
# filepath: app/controllers/admin/dashboard_controller.rb
class Admin::DashboardController < ApplicationController
  before_action :authenticate_user!
  # Add authorization check for teacher/editor roles

  def show
    @my_lectures = find_administrated_lectures(current_user)
    @active_campaigns = find_active_campaigns(@my_lectures)
    @grading_queue = find_grading_queue(@my_lectures)

    # For the case where a teacher is also a tutor
    @tutored_groups = find_tutored_groups(current_user)
  end

  private

  # ... helper methods to query the respective models ...
end

2) "My Lectures" Widget (Admin Cockpit)

What it represents

  • The primary widget, showing a card for each lecture the user administers.

Content per card:

  • Lecture Title and Term.
  • Key administrative actions:
    • "Manage Registrations": Links to the Registration::Campaign teacher/editor UI for that lecture. Shows status (Draft, Open, Closed, Finalized).
    • "Manage Rosters": Links to the Roster::MaintenanceService UI for managing tutorial/exam rosters post-allocation.
    • "Gradebook": Links to the new grading UI for the lecture's assessments.
    • "Announcements": Links to create/edit announcements for the lecture.

3) "Active Campaigns" Widget (Registration Overview)

What it represents

  • An aggregated view of all active or recently closed registration campaigns across the user's lectures.

Content:

  • A list of campaigns showing:
    • Campaign Title & Lecture.
    • Status: e.g., "Open - 150/200 registered", "Awaiting Allocation", "Finalized".
    • Action: A "View Details" button linking to the campaign's admin page.

API at a glance (Teacher)

  • campaign.evaluate_policies_for(user) → Result (pass, failed_policy, trace)
  • campaign.policies_satisfied?(user) → Boolean
  • campaign.open_for_registrations? → Boolean
  • campaign.allocate_and_finalize! → Execute solver and finalize (preference-based)
  • campaign.finalize! → Materialize confirmed results to rosters
  • campaign.registration_policies / .registration_items / .user_registrations

4) "Grading Queue" Widget

What it represents

  • An actionable list of assessments that have submissions awaiting grading.

Content:

  • A list of assessments showing:
    • Assessment Title & Lecture.
    • Status: e.g., "35 new submissions to grade".
    • Action: A "Start Grading" button linking directly to the grading UI, filtered for that assessment.

5) "My Tutoring Responsibilities" Widget

What it represents

  • A dedicated section for users who are also tutors for specific groups. This covers the case where teachers or editors are directly responsible for a tutorial.

Content:

  • A list of their assigned tutorial groups (e.g., "Advanced Programming - Tutorial Group 5").
  • For each group, provides direct links to:
    • "View Roster": See the list of students in their group.
    • "Grade Submissions": A direct link to the grading UI, pre-filtered for their group's submissions.

6) UI Mockup (Placeholder)

graph TD
    subgraph "Teacher & Editor Dashboard"
        direction TB

        subgraph "My Lectures (Admin Cockpit)"
            A["Lecture: Advanced Programming<br/>(Links: Registrations, Rosters, Gradebook)"]
            B["Lecture: Linear Algebra<br/>(Links: Registrations, Rosters, Gradebook)"]
        end

        subgraph "Active Campaigns"
             C["AP Reg: 150/200 registered"]
             D["LA Reg: Awaiting Allocation"]
        end

        subgraph "Grading Queue"
            E["AP HW3: 35 new submissions"]
        end

        subgraph "My Tutoring Responsibilities"
            F["Tutorial: Adv. Programming G5<br/>(Links: Roster, Grade)"]
        end
    end

Student Dashboard

Problem Overview

The current landing page for students is a simple list of their subscribed lectures. It lacks a centralized, actionable view of a student's immediate tasks, deadlines, and overall status across all their courses. Students must navigate to separate pages to find open registrations, check assignment due dates, or see recent announcements, leading to a disconnected experience.

Solution Architecture

We will introduce a new, unified dashboard as the primary landing page for all logged-in students, replacing the existing main/start view. The dashboard will be composed of dynamic "widgets" or "cards" that surface the most relevant and time-sensitive information from both existing and new systems. It will serve as a read-only presentation layer, providing clear, actionable links to the relevant parts of the application.

The development will be phased to provide immediate value while accommodating the parallel implementation of the new registration and grading systems:

  • Phase A (Immediate Value): The initial dashboard will be built using existing data models (Assignment, Announcement, user's lecture subscriptions). This provides an improved user experience for the current semester.
  • Phase B (Integration): As new systems come online, their corresponding widgets ("Open Registrations", "Recent Grades", "Tutoring") will be activated on the dashboard, ready for the next semester.

1) Dashboard Controller & View

What it represents

  • The new Rails controller and view that will render the dashboard. It will be responsible for fetching and organizing all the necessary data for the current user.

Think of it as

  • The central hub for the student experience.
# filepath: app/controllers/dashboard_controller.rb
class DashboardController < ApplicationController
  before_action :authenticate_user!

  def show
    # Phase A: Data from existing models
    @my_lectures = current_user.current_subscribed_lectures
    @upcoming_assignments = find_upcoming_assignments(@my_lectures)
    @recent_announcements = find_recent_announcements(@my_lectures)

    # Phase B: Data from new models (conditionally enabled)
    if feature_enabled?(:roster_system)
      @tutored_groups = find_tutored_groups(current_user)
    end
    if feature_enabled?(:registration_system)
      @open_campaigns = find_open_campaigns(current_user)
    end
    if feature_enabled?(:grading_system)
      @recent_grades = find_recent_grades(current_user)
    end
  end

  private

  # ... helper methods to query the respective models ...
end

2) "What's Next?" Widget (Actionable Deadlines)

What it represents

  • A timeline or sorted list of the most urgent, time-sensitive items for the student. This is the most critical component of the dashboard.

Content:

  • Open Registrations: Shows active Registration::Campaign records the user is eligible for.
    • Displays: Campaign Title, Lecture, and "Closes on: [deadline]".
    • Action: A "Register Now" button linking to the campaign page.
  • Assignment Deadlines: Shows upcoming Assignment records from the user's subscribed lectures.
    • Displays: Assignment Title, Lecture, and "Due: [deadline]".
    • Action: A "View/Submit" button linking to the submission page.

API at a glance (Student registration)

  • campaign.open_for_registrations? → Boolean
  • campaign.policies_satisfied?(current_user) → Boolean
  • campaign.evaluate_policies_for(current_user) → Result (when you need reasons)
  • current_user.user_registrations.where(registration_campaign: campaign)
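The widget would typically gate the "Register Now" action on the first two calls; the sketch below uses an illustrative stub in place of the real ActiveRecord-backed Registration::Campaign:

```ruby
# Stub standing in for Registration::Campaign so the sketch runs; the real
# model delegates the policy check to the policy engine.
CampaignStub = Struct.new(:is_open, :eligible_user_ids) do
  def open_for_registrations?
    is_open
  end

  def policies_satisfied?(user_id)
    eligible_user_ids.include?(user_id)
  end
end

# Show "Register Now" only for open campaigns the student may actually join.
def show_register_button?(campaign, user_id)
  campaign.open_for_registrations? && campaign.policies_satisfied?(user_id)
end

campaign = CampaignStub.new(true, [1, 2])
p show_register_button?(campaign, 1)  # => true
p show_register_button?(campaign, 5)  # => false
```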

3) "My Courses" Widget (Quick Access)

What it represents

  • An evolution of the existing "My current subscribed lectures" list, presented as a grid of cards for easier access.

Content:

  • Each card displays the lecture title and instructor.
  • Provides quick-access icons or links to key areas for that lecture, such as Announcements, Submissions, and the Forum.

4) "Recent Activity" Widget (Notifications)

What it represents

  • A feed showing recent events and feedback relevant to the user.

Content:

  • New Grades: Once the new grading system is live, this will show recently published grades.
    • Displays: "Grades for [Assignment Title] are now available."
    • Action: A "View Grade" button linking to the Assessment::Participation details.
  • New Announcements: Shows the 2-3 most recent announcements from the user's subscribed lectures.
    • Displays: A snippet of the announcement text.
    • Action: A link to view the full announcement.

5) "My Tutoring Responsibilities" Widget

What it represents

  • A dedicated section that appears only for students who are also tutors for one or more tutorial groups. This is a key feature for student tutors.

Content:

  • A list of their assigned tutorial groups (e.g., "Advanced Programming - Tutorial Group 5").
  • For each group, provides direct links to:
    • "View Roster": See the list of students in their group.
    • "Grade Submissions": A direct link to the grading UI, pre-filtered for their group's submissions.

6) Phased Implementation Strategy

The dashboard's development is designed to be non-disruptive and deliver value incrementally.

  • Phase A (During Current Semester):

    • Action: Build the dashboard shell, controller, and the "My Courses" and "Assignment Deadlines" widgets using existing data sources.
    • Placement: This can be developed in parallel with Steps 1-4 of the main implementation plan.
    • Outcome: Students in the current semester get an improved, more organized landing page immediately.
  • Phase B (Before Next Semester):

    • Action: Implement and activate the "Open Registrations", "Recent Activity" (grades), and "My Tutoring Responsibilities" widgets, connecting them to the new backend models.
    • Placement: This work should be done after Steps 5-8 of the main plan are functionally complete.
    • Outcome: The dashboard is fully functional for the start of the next semester, seamlessly displaying information from the new systems.

7) UI Mockup (Placeholder)

graph TD
    subgraph Dashboard
        direction TB
        subgraph "My Tutoring Responsibilities"
            G["Tutorial: Adv. Programming G5<br/>(Links: Roster, Grade)"]
        end
        subgraph "What's Next?"
            A["Open: Tutorial Registration<br/>Closes: Oct 28"]
            B["Due: Homework 3<br/>Due: Oct 30"]
        end
        subgraph "My Courses"
            C["Card: Advanced Programming<br/>(Links: Announce, Submit, Forum)"]
            D["Card: Linear Algebra<br/>(Links: Announce, Submit, Forum)"]
        end
        subgraph "Recent Activity"
            E["New Grade: Homework 2"]
            F["New Announcement: AP"]
        end
    end

Overarching Strategy: Parallel, Non-Disruptive Implementation

The core principle of this plan is to build the entire new registration and grading system in parallel with the existing, operational one. The new features will be built against new database tables and services and will only be activated for courses in the next academic semester. This ensures zero disruption to students and staff using the platform for the current, ongoing semester. Frontend development is integrated into each step, delivering complete "vertical slices" of functionality.

Workstreams and step repetition

We implement major areas as separate workstreams (Registration, Grading, Dashboards, Student Performance), each beginning with a Foundations phase. In this plan, Registration foundations land at Step 2, Grading foundations at Step 6, and Student Performance foundations at Step 11; Dashboards are partially integrated at Step 10 and completed at Step 13. Foundations are schema-only for each workstream; controllers, services, and UI arrive in subsequent steps. The PR Roadmap chapter provides a concrete crosswalk for the Registration workstream.

Visual Implementation Roadmap

graph TD
    subgraph "Phase 1: Tutorial/Talk Registration"
        S1["1. Dashboard Shell"] --> S2["2. Registration Foundations"];
        S2 --> S3["3. FCFS Mode"];
        S3 --> S4["4. Preference-Based Mode"];
        S4 --> S5["5. Roster Maintenance"];
    end

    subgraph "Phase 2: Grading & Assessments"
        S5 --> S6["6. Grading Foundations"];
        S6 --> S7["7. Assessments (Formalize)"];
        S7 --> S8["8. Assignment Grading"];
        S8 --> S9["9. Participation Tracking"];
    end

    subgraph "Phase 3: Dashboard Integration (Partial)"
        S9 --> S10["10. Dashboard Impl. (Partial)"];
    end

    subgraph "Phase 4: Student Performance & Exam Registration"
        S10 --> S11["11. Student Performance System"];
        S11 --> S12["12. Exam Registration"];
        S12 --> S13["13. Dashboard Extension"];
    end

    subgraph "Phase 5: Quality & Hardening"
        S13 --> S14["14. Quality & Hardening"];
    end

    style S10 fill:#cde4ff,stroke:#5461c8
    style S13 fill:#cde4ff,stroke:#5461c8

The 14-Step Implementation Plan

  1. [Dashboards] Dashboard Shell & Flags Action: Introduce Student + Teacher/Editor dashboard controllers, blank widgets, navigation entries. All new feature areas render as disabled cards until their step enables them.

    Incremental widgets

    As Steps 3–4 land, expose lightweight widget endpoints and add hidden dashboard cards behind feature flags to exercise data paths. The dedicated dashboards step later enables these by default and adds polish.

  2. [Registration] Foundations (Additive Schema per Workstream) Active workstream: Registration. Action: Create only the new tables and AR models for the active workstream. This step is purely backend and involves no UI changes.

    For the Registration workstream this includes registration_campaigns, registration_items, registration_user_registrations, and registration_policies. Grading- and exam-related tables are added later when those workstreams are active.

    Also implement Registration::PolicyEngine with core policy kinds (institutional_email, prerequisite_campaign) and introduce core concerns for controllers to target in the next step:

    • Registration::Campaignable for hosts of campaigns; include in Lecture.
    • Registration::Registerable for assignable targets; include in Tutorial. Provide interface stubs such as materialize_allocation! and allocated_user_ids.

    Exam registration deferred

    Exam registration and student performance policies are deferred to Steps 11-12. This step focuses on tutorial/talk registration only.

    Non-Disruptive Impact

    This step is purely additive. It creates new, unused tables and models scoped to the active workstream. It does not alter existing tables (assignments, submissions, etc.) serving the current semester.

    Crosswalk

    See "Implementation PR Roadmap" for the Registration workstream's Step 2 breakdown.

  3. [Registration] Open FCFS Tutorial/Talk Campaigns Action: Implement the backend controllers and frontend UIs for the FCFS registration mode. This includes creating teacher/editor UIs to set up and manage campaigns and student UIs to view and register for items. FCFS logic uses simple capacity checks (no complex allocation).

    Tip

    Prerequisites: Step 2 (schema, policy engine, core concerns included in Lecture and Tutorial).

    Controllers: Wire Registration::CampaignsController, Registration::UserRegistrationsController, and Registration::PoliciesController (HTML + Turbo Frames/Streams).

    Also add minimal dashboard widget data endpoints (counts/status) and update hidden cards under feature flags.

    Scope for MVP

    Initial FCFS rollout targets Tutorials and Talks. Exam registration is deferred to Step 12.

    Non-Disruptive Impact

    This new workflow is only triggered when a Registration::Campaign is created for a course. Since you will only create these campaigns for next semester's courses, the current semester's courses will continue to function entirely on the old logic.

  4. [Registration] Preference-Based Mode (incl. Solver & Finalization) Action: Deliver preference-based registration, building on FCFS foundations. Implement student ranking UI and persistence, roster foundations for finalize (minimal persistence/service so materialize_allocation! can replace roster memberships), and solver integration with finalize wiring end-to-end.

    Controllers: Add Registration::AllocationController for trigger/retry/finalize and Turbo updates from background jobs.

    Also update hidden dashboard cards to surface preference-based counters and latest results when enabled via feature flags.

    Ordering

    Build roster foundations before implementing finalize!, since materialize_allocation! replaces roster memberships. Add source_campaign_id to roster join tables for tracking.

    Non-Disruptive Impact

    Like FCFS, preference-based logic runs only for new Registration::Campaigns and does not affect the live semester.

  5. [Registration] Roster Maintenance (UI & Operations) Action: Implement Roster::MaintenanceController and Roster::MaintenanceService with an admin-facing UI for post-allocation roster management (moves, adds/removes) with capacity enforcement. Finalize the UX:

    • Candidates panel lives on the Roster Overview (not on Detail) and lists unassigned users from a selected, completed campaign.

    • Provide a manual "Add student" action on Overview.

    • Tutor view is read-only; exams do not show a candidates panel.

    Also add RecountAssignedJob for integrity. Finalize abilities so tutors see read-only Detail for their groups. Add a hidden dashboard widget for teacher/editor with roster links and counts.

    Non-Disruptive Impact

    Operates only on rosters materialized from new campaigns. Current semester rosters remain untouched.

  6. [Grading] Grading Foundations (Schema) Action: Create all grading-related tables and AR models. This includes core assessment tables (assessment_assessments, assessment_tasks, assessment_participations, assessment_task_points), and grade scheme tables. Optional multiple-choice support fields can also be added here to keep schema changes concentrated.

    Exam and performance tables deferred

    Exam-related tables (exams) and student performance tables (student_performance_records, student_performance_certifications, etc.) are deferred to Steps 11-12. This step focuses on assignment grading only.

    Non-Disruptive Impact

    This step is purely additive. It creates new, unused tables and models for the Grading workstream. It does not alter existing live semester tables.

  7. [Grading] Assessments (Formalize Assignment as Assessable) Action: Run a background migration to create a corresponding Assessment::Assessment record for each existing Assignment. Expose controllers for read-only exploration.

    Controllers: Assessment::AssessmentsController (CRUD, read-only views) and Assessment::ParticipationsController (read-only). These become fully interactive after Step 8.

    Non-Disruptive Impact

    The new assessment tables are created in parallel. The migration links existing Assignment records to the new system without altering any existing data or behavior for the current semester.

  8. [Grading] Grading Flow & Submission Fan-out Action: Introduce the backend Assessment::GradingService. Build new grading UIs for instructors and TAs where they can view submissions and enter points. This UI will call the new service to save points and grades to the new tables (assessment_participations, assessment_task_points).

    Controllers: Enable Assessment::GradingController and Assessment::ParticipationsController. Add publish_results and unpublish_results actions on Assessment::AssessmentsController.

    Non-Disruptive Impact

    This is a completely new UI and backend service. It will be deployed but not made accessible for current semester courses. The existing submission viewing UI remains untouched for the live semester.

  9. [Grading] Participation Tracking Action: Implement Achievement model as a new assessable type for tracking non-graded participation (presentations, attendance). Build UI for teachers to mark achievements and for students to view their progress.

    Controllers: Add Assessment::AchievementsController for CRUD and Assessment::ParticipationsController extensions for achievement marking.

    Non-Disruptive Impact

    This is entirely new functionality with no dependencies on existing data. Will be used for next semester courses only.

  10. [Dashboards] Dashboard Implementation (Partial) Action: Implement initial versions of Student Dashboard and Teacher/Editor Dashboard with widgets for tutorial/talk registration, assignment grading, and roster management. Lecture performance and exam registration widgets remain hidden.

    Controllers: DashboardsController (student/teacher views) with widget partials for completed workstreams (Steps 2-9).

    Incomplete coverage

    Dashboards will not show exam eligibility or certification status yet. These widgets are added in Step 13.

    Non-Disruptive Impact

    Provides immediate UX improvement for all users. Widgets for new features show data from new tables only.

  11. [Student Performance] System Foundations Action: Create student performance tables and models: student_performance_records, student_performance_rules, student_performance_achievements, and student_performance_certifications. Implement StudentPerformance::ComputationService to materialize Records from assessment data. Implement StudentPerformance::Evaluator to generate certification proposals. Build teacher certification workflow UI.

    Controllers: StudentPerformance::RecordsController (factual data display), StudentPerformance::CertificationsController (teacher certification workflow), and StudentPerformance::EvaluatorController (proposal generation).

    No policy integration yet

    The student_performance policy kind is added in Step 12 when exam registration is implemented.

    Non-Disruptive Impact

    Creates new tables for performance tracking. Does not affect existing semester data.

  12. [Exam] Registration & Certification Integration Action: Create Exam model with cross-cutting concerns:

    • Registration::Campaignable (host campaigns)
    • Registration::Registerable (be registered for)
    • Roster::Rosterable (manage registrants)
    • Assessment::Assessable (link to grading)

    Add student_performance policy kind to Registration::PolicyEngine. Implement pre-flight certification checks in Registration::CampaignsController (before open) and Registration::AllocationController (before finalize). Wire exam grading to assessment system and implement GradeScheme::Applier.

    Controllers: ExamsController (CRUD, scheduling), GradeScheme::SchemesController (preview/apply), and updates to Registration::CampaignsController for certification checks.

    Extension: Multiple Choice

    MC exam support can be added as optional extension after core functionality is stable.

    Non-Disruptive Impact

    Final piece of new grading workflow. Only used for next semester exams.

  13. [Dashboards] Dashboard Extension (Complete) Action: Add student performance and exam registration widgets to dashboards. Connect "Exam Eligibility Status", "Certification Pending List", and "Performance Overview" to backend services from Steps 11-12.

    Controllers: Extend DashboardsController with widgets for lecture performance and exam registration.

    Non-Disruptive Impact

    Completes dashboard functionality for next semester. All widgets read from new tables only.

  14. [Quality] Hardening & Integrity Action: Create backend jobs for data integrity and reporting (PerformanceRecordUpdateJob, CertificationStaleCheckJob, AllocatedAssignedMatchJob). Build admin dashboards and reporting views.

    Non-Disruptive Impact

    Maintenance jobs operate exclusively on new tables without touching live production data.

Implementation PR Roadmap

This chapter breaks down the Registration system into small, reviewable pull requests. It complements the Implementation Plan with concrete PR scopes, dependencies, and acceptance criteria.

Info

Guiding principles:

  • Keep PRs tight and shippable behind feature flags.
  • Prefer vertical slices that produce visible value.
  • Add tests and docs incrementally with each PR.

Tip

Plan ↔ PR crosswalk (Registration workstream):

  • Step 2 — Foundations (schema/backend): PR-2.x
  • Step 3 — FCFS mode (admin + student): PR-3.x
  • Step 4 — Preference-based mode (student + allocation): PR-4.x
  • Step 5 — Roster maintenance: PR-5.x

Abstract

Registration — Step 2: Foundations (Schema)

PR-2.1 — Schema and core models

  • Scope: AR models Registration::Campaign, Item, UserRegistration, Policy and additive migrations.
  • Migrations:
    • 20251028000000_create_registration_campaigns.rb
    • 20251028000001_create_registration_items.rb
    • 20251028000002_create_registration_user_registrations.rb
    • 20251028000003_create_registration_policies.rb
  • Refs: Models — Campaign, Item, UserRegistration, Policy
  • Acceptance: Migrations run cleanly; models have correct associations and validations; no existing tables altered.

PR-2.2 — Policy engine (institutional_email, prerequisite_campaign)

  • Scope: Registration::PolicyEngine with two policy kinds.
  • Implementation: Registration::PolicyEngine#evaluate_policies_for, Registration::Policy#evaluate for institutional_email and prerequisite_campaign kinds.
  • Test doubles: Tests use doubles for checking roster membership in prerequisite campaigns.
  • Refs: PolicyEngine, Policy#evaluate
  • Acceptance: Policy engine evaluates ordered policies with short-circuit; tests pass with doubled roster data; student_performance policy kind deferred to Step 11.
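The ordered, short-circuiting evaluation can be sketched in plain Ruby. All names and data shapes below are illustrative stand-ins, not the actual Registration::PolicyEngine API:

```ruby
# Sketch of ordered policy evaluation with short-circuit (hypothetical shapes).
# A policy passes (returns nil) or fails (returns its kind as the error).
Policy = Struct.new(:kind, :position, :check) do
  def evaluate(user)
    check.call(user) ? nil : kind
  end
end

class PolicyEngine
  def initialize(policies)
    # Policies are evaluated in their configured order.
    @policies = policies.sort_by(&:position)
  end

  # Stops at the first failing policy and reports which kind failed.
  def evaluate_policies_for(user)
    @policies.each do |policy|
      failure = policy.evaluate(user)
      return { ok: false, failed: failure } if failure
    end
    { ok: true }
  end
end

policies = [
  Policy.new(:institutional_email, 0,
             ->(u) { u[:email].end_with?("@uni-heidelberg.de") }),
  Policy.new(:prerequisite_campaign, 1,
             ->(u) { u[:completed_prerequisite] })
]
engine = PolicyEngine.new(policies)
engine.evaluate_policies_for(email: "a@uni-heidelberg.de",
                             completed_prerequisite: true)
# => { ok: true }
```

In tests, the `check` lambdas would be replaced by doubles that stub roster membership lookups, matching the test-doubles note above.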

PR-2.3 — Core concerns (Campaignable, Registerable)

  • Scope: Include Registration::Campaignable in Lecture and Registration::Registerable in Tutorial.
  • Implementation:
    • Campaignable: has_many :registration_campaigns, as: :campaignable
    • Registerable: capacity, allocated_user_ids (raises NotImplementedError), materialize_allocation! (raises NotImplementedError)
  • Refs: Campaignable, Registerable
  • Acceptance: Tutorial includes Registerable; methods raise NotImplementedError when called; no functional changes to existing semester.
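The stubbed contract might look like this plain-Ruby sketch (ActiveSupport machinery omitted; method names are the ones listed in the PR scope):

```ruby
# Sketch of the Registerable contract: the concern only defines the interface,
# concrete persistence arrives in later PRs (PR-4.2).
module Registerable
  def capacity
    raise NotImplementedError, "#{self.class} must implement #capacity"
  end

  def allocated_user_ids
    raise NotImplementedError, "#{self.class} must implement #allocated_user_ids"
  end

  def materialize_allocation!(user_ids)
    raise NotImplementedError, "#{self.class} must implement #materialize_allocation!"
  end
end

# Stand-in for the real AR model, just to show inclusion.
class Tutorial
  include Registerable
end
```

This makes the acceptance criterion ("methods raise NotImplementedError when called") directly checkable without touching any live-semester behavior.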

PR-2.4 — Seminar/Talk support (Optional, can defer post-MVP)

  • Scope: Include Registration::Campaignable in Seminar and Registration::Registerable in Talk.
  • Same pattern as PR-2.3 but for seminars.
  • Refs: Same concerns, different models.
  • Acceptance: Same as PR-2.3 for seminar context.

Abstract

Registration — Step 3: FCFS Mode

PR-3.1 — Admin: Campaigns scaffold (CRUD + open/close)

  • Scope: Teacher/editor UI for campaign lifecycle (draft → open → closed → processing → completed).
  • Controllers: Registration::CampaignsController (new/create/edit/update/show/destroy).
  • Actions: open (validates policies, updates status to :open), close (background job triggers status → :closed), reopen (reverts to :open if allocation not started).
  • Freezing: Campaign-level attributes freeze on lifecycle transitions (allocation_mode, registration_opens_at after draft; policies freeze on open).
  • UI: Turbo Frames for inline editing; flash messages for validation errors and freeze violations; feature flag registration_campaigns_enabled; disabled fields for frozen attributes.
  • Refs: Campaign lifecycle & freezing, State diagram
  • Acceptance: Teachers can create draft campaigns, add policies, open campaigns (with policy validation); campaigns cannot be deleted when open/processing; freezing rules enforced with clear error messages; frozen fields disabled in UI; feature flag gates UI.

PR-3.2 — Admin: Items CRUD (nested under Campaign)

  • Scope: Manage registerable items within a campaign.
  • Controllers: Registration::ItemsController (nested routes under campaigns).
  • Freezing: Items cannot be removed when status != :draft (prevents invalidating existing registrations); adding items always allowed.
  • UI: Turbo Frames for inline item addition/removal; capacity editing; delete button disabled for items when campaign is open.
  • Refs: Item model, Freezing rules
  • Acceptance: Teachers can add items anytime; items cannot be removed if campaign is open or has registrations for that item; capacity edits validated.
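The removal freeze from the acceptance criteria reduces to a small predicate; a hypothetical sketch (names not taken from the codebase):

```ruby
# Sketch of the item-removal freeze: an item may be removed only while the
# campaign is still in draft AND no registrations reference it.
def item_removable?(campaign_status, registration_count)
  campaign_status == :draft && registration_count.zero?
end

item_removable?(:draft, 0) # => true
item_removable?(:open, 0)  # => false (campaign already open)
item_removable?(:draft, 3) # => false (would invalidate registrations)
```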

PR-3.3 — Student: Register (single-item FCFS)

  • Scope: Student registration for single-item campaigns (e.g., one tutorial per lecture).
  • Controllers: Registration::UserRegistrationsController (create/destroy).
  • Logic: FCFS mode with capacity checks; policy evaluation on create.
  • Freezing: Item capacity can increase anytime; can decrease only if new_capacity >= confirmed_count (prevents revoking confirmed spots).

  • UI: Registration button; Turbo Stream updates for immediate feedback; capacity editing validates against confirmed count.
  • Refs: FCFS mode, Freezing rules
  • Acceptance: Students can register for open campaigns; capacity enforced; policy violations shown with error messages; confirmed status set immediately; capacity decrease blocked if it would revoke spots.
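The capacity-decrease rule above can be sketched as a guard object (a minimal illustration, not the actual validation code):

```ruby
# Sketch of the capacity freeze: capacity may increase at any time, but may
# decrease only down to the number of already-confirmed registrations, so no
# confirmed spot is ever revoked.
class CapacityChange
  def initialize(confirmed_count:)
    @confirmed_count = confirmed_count
  end

  def allowed?(new_capacity)
    new_capacity >= @confirmed_count
  end
end

guard = CapacityChange.new(confirmed_count: 18)
guard.allowed?(20) # => true  (increase: always fine)
guard.allowed?(18) # => true  (decrease to exactly the confirmed count)
guard.allowed?(15) # => false (would revoke three confirmed spots)
```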

PR-3.4 — Student: Register (multi-item FCFS)

  • Scope: Extend PR-3.3 for multi-item campaigns (e.g., tutorial selection from multiple options).
  • Controllers: Extend Registration::UserRegistrationsController to handle item selection.
  • UI: Item selection dropdown; Turbo Stream for dynamic item list updates.
  • Refs: Multi-item campaigns
  • Acceptance: Students can select from available items; capacity per item enforced; switching items updates previous registration.

Abstract

Registration — Step 4: Preference-Based Mode

PR-4.1 — Student: Preference ranking UI

  • Scope: UI for students to rank items by preference.
  • Controllers: Extend Registration::UserRegistrationsController with update_preferences action.
  • UI: Drag-and-drop ranking interface; persisted as JSONB array in preferences column.
  • Refs: Preference mode
  • Acceptance: Students can rank items; preferences saved; cannot submit incomplete rankings.
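The "no incomplete rankings" check is a set comparison against the campaign's items; a hedged sketch of the validation (helper name hypothetical):

```ruby
# Sketch of the completeness check: a submitted ranking must contain every item
# of the campaign exactly once (no missing items, no duplicate ranks).
def complete_ranking?(item_ids, preferences)
  preferences.sort == item_ids.sort
end

item_ids = [11, 12, 13]
complete_ranking?(item_ids, [13, 11, 12]) # => true
complete_ranking?(item_ids, [13, 11])     # => false (item 12 missing)
complete_ranking?(item_ids, [13, 11, 11]) # => false (duplicate rank)
```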

PR-4.2 — Roster foundations (minimal persistence)

  • Scope: Implement minimal roster persistence for materialize_allocation!.
  • Models: Add source_campaign_id to tutorial_memberships join table.
  • Concerns: Implement Roster::Rosterable with allocated_user_ids, materialize_allocation!, roster_entries, mark_campaign_source!.
  • Implementation in Tutorial: Override allocated_user_ids to delegate to roster_user_ids; implement materialize_allocation! using replace_roster! pattern.
  • Refs: Rosterable concern, Tutorial implementation
  • Acceptance: Tutorial implements Rosterable methods; materialize_allocation! replaces roster entries tagged with campaign; allocated_user_ids returns current roster user IDs.
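The replace_roster! pattern can be illustrated with an in-memory stand-in for the tutorial_memberships table (all names hypothetical): entries previously materialized from the same campaign are swapped out on re-run, while manually added entries survive.

```ruby
# Sketch of the replace_roster! pattern behind materialize_allocation!.
class Roster
  Entry = Struct.new(:user_id, :source_campaign_id)

  def initialize
    @entries = []
  end

  # Manual adds carry no campaign tag and are never replaced by re-runs.
  def add_manual(user_id)
    @entries << Entry.new(user_id, nil)
  end

  # Drop entries from a previous run of the same campaign, then re-insert.
  def materialize_allocation!(campaign_id, user_ids)
    @entries.reject! { |e| e.source_campaign_id == campaign_id }
    user_ids.each { |uid| @entries << Entry.new(uid, campaign_id) }
  end

  def allocated_user_ids
    @entries.map(&:user_id)
  end
end

roster = Roster.new
roster.add_manual(1)                       # late manual add
roster.materialize_allocation!(42, [2, 3]) # first allocation run
roster.materialize_allocation!(42, [3, 4]) # re-run replaces only campaign entries
roster.allocated_user_ids # => [1, 3, 4]
```

This is why storing source_campaign_id on the join table matters: it makes allocation idempotent per campaign without clobbering manual roster maintenance.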

PR-4.3 — Solver + finalize (integrate allocation engine)

  • Scope: Wire solver and finalize end-to-end.
  • Services: Registration::Allocation::Solver (delegates to CP-SAT or placeholder greedy), Registration::FinalizationGuard.
  • Controllers: Registration::AllocationController (trigger/retry/finalize actions).
  • Background: AllocationJob runs solver and updates UserRegistration statuses via Turbo Streams.
  • Logic: On finalize, call FinalizationGuard#check!, then materialize_allocation! for each confirmed user.
  • Freezing: Item capacity freezes once status == :completed (results published); can be adjusted freely during draft, open, closed states.
  • Refs: Solver, Finalization, Freezing rules
  • Acceptance: Teachers can trigger allocation; results streamed to UI; finalize materializes rosters; capacity changes blocked after completion; unconfirmed users stay in limbo; confirmed users added to rosters via materialize_allocation!.
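The finalize sequence (guard first, then one materialization per item from confirmed registrations only) might look like this simplified sketch, with plain hashes standing in for UserRegistration records:

```ruby
# Sketch of finalize: FinalizationGuard#check! must pass before any roster is
# written; only confirmed registrations are materialized.
class FinalizationGuard
  def initialize(registrations)
    @registrations = registrations
  end

  # Raises when any registration is still unresolved.
  def check!
    pending = @registrations.reject { |r| [:confirmed, :rejected].include?(r[:status]) }
    raise "#{pending.size} unresolved registrations" unless pending.empty?
  end
end

def finalize!(registrations, rosters)
  FinalizationGuard.new(registrations).check!
  registrations.select { |r| r[:status] == :confirmed }
               .group_by { |r| r[:item_id] }
               .each { |item_id, regs| rosters[item_id] = regs.map { |r| r[:user_id] } }
  rosters
end

regs = [
  { user_id: 1, item_id: 10, status: :confirmed },
  { user_id: 2, item_id: 10, status: :confirmed },
  { user_id: 3, item_id: 11, status: :rejected }
]
finalize!(regs, {}) # => { 10 => [1, 2] }
```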

Abstract

Registration — Step 5: Roster Maintenance

PR-5.1 — Roster maintenance UI (admin)

  • Scope: Post-allocation roster management.
  • Controllers: Roster::MaintenanceController (move/add/remove actions).
  • UI: Roster Overview with candidates panel (unassigned users from completed campaign); Detail view for individual roster with capacity checks.
  • Refs: Roster maintenance
  • Acceptance: Teachers can move students between rosters; capacity enforced; candidates panel lists unassigned users; manual add/remove actions work.

PR-5.2 — Tutor abilities (read-only roster access)

  • Scope: Tutors can view rosters for their assigned groups.
  • Abilities: Update CanCanCan to allow read-only roster access for tutors.
  • UI: Tutors see Detail view without edit actions.
  • Refs: Abilities
  • Acceptance: Tutors can view rosters for their tutorials; cannot edit; exams do not show candidates panel.

PR-5.3 — Manual add student (from candidates or arbitrary)

  • Scope: Add students to rosters manually.
  • Controllers: Extend Roster::MaintenanceController with add_student action.
  • UI: "Add student" button on Overview; search input for arbitrary student addition.
  • Refs: Manual operations
  • Acceptance: Teachers can add students from candidates or search; capacity enforced; duplicate prevention.

PR-5.4 — Integrity job (assigned/allocated reconciliation)

  • Scope: Background job to verify roster consistency.
  • Job: AllocatedAssignedMatchJob compares Item#assigned_users with Registerable#allocated_user_ids.
  • Monitoring: Logs mismatches for admin review.
  • Refs: Integrity invariants
  • Acceptance: Job runs nightly; reports mismatches; no auto-fix (manual review required).
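The comparison the job performs is a symmetric set difference; a minimal sketch of the report shape (field names hypothetical):

```ruby
# Sketch of the assigned/allocated reconciliation: report both directions of
# the mismatch, never auto-fix.
def roster_mismatch(assigned_user_ids, allocated_user_ids)
  {
    assigned_but_not_allocated: assigned_user_ids - allocated_user_ids,
    allocated_but_not_assigned: allocated_user_ids - assigned_user_ids
  }
end

roster_mismatch([1, 2, 3], [2, 3, 4])
# => { assigned_but_not_allocated: [1], allocated_but_not_assigned: [4] }
```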

Abstract

Grading — Step 6: Foundations (Schema)

PR-6.1 — Assessment schema (core tables)

  • Scope: Create assessment_assessments, assessment_tasks, assessment_participations, assessment_task_points.
  • Migrations:
    • 20251105000000_create_assessment_assessments.rb
    • 20251105000001_create_assessment_tasks.rb
    • 20251105000002_create_assessment_participations.rb
    • 20251105000003_create_assessment_task_points.rb
  • Refs: Assessment models
  • Acceptance: Migrations run; models have correct associations; no existing tables altered.

PR-6.2 — Grade scheme schema

  • Scope: Create grade_schemes and grade_scheme_thresholds.
  • Migrations:
    • 20251105000004_create_grade_schemes.rb
    • 20251105000005_create_grade_scheme_thresholds.rb
  • Refs: GradeScheme models
  • Acceptance: Migrations run; models have correct validations; percentage-based thresholds supported.

Abstract

Grading — Step 7: Assessments (Formalize Assignments)

PR-7.2 — Assessment controllers (read-only exploration)

  • Scope: CRUD for assessments and participations.
  • Controllers: Assessment::AssessmentsController, Assessment::ParticipationsController (read-only for now).
  • UI: Index/show views for assessments; participation list per assessment.
  • Refs: Assessment controllers
  • Acceptance: Teachers can view assessments and participations; no grading UI yet; feature flag gates access.

Abstract

Grading — Step 8: Assignment Grading

PR-8.1 — Grading service (backend)

  • Scope: Assessment::GradingService for saving points and grades.
  • Implementation: Fan-out pattern creates Participation and TaskPoints per student (or team).
  • Refs: GradingService
  • Acceptance: Service creates participations and task points; handles team grading; validates point ranges.
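The fan-out itself can be sketched as a pure function: points entered once per team are expanded into one row per member per task, so later queries stay per-student (row shape hypothetical):

```ruby
# Sketch of the grading fan-out: team-level task points become one
# assessment_task_points-shaped row per member and task.
def fan_out_task_points(team_member_ids, task_points)
  team_member_ids.flat_map do |user_id|
    task_points.map do |task_id, points|
      { user_id: user_id, task_id: task_id, points: points }
    end
  end
end

rows = fan_out_task_points([7, 8], { 1 => 4.5, 2 => 3.0 })
rows.size # => 4 (two members x two tasks)
```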

PR-8.2 — Grading UI (teacher/TA)

  • Scope: Grading interface for entering points.
  • Controllers: Assessment::GradingController (new/create/update).
  • UI: Grid view with students × tasks; inline editing; Turbo Frames for updates.
  • Refs: Grading UI mockup
  • Acceptance: Teachers can enter points; service called on save; results preview shown; feature flag gates UI.

PR-8.3 — Publish/unpublish results

  • Scope: Toggle result visibility for students.
  • Controllers: Extend Assessment::AssessmentsController with publish_results and unpublish_results actions.
  • UI: Toggle button on assessment show page.
  • Refs: Publication workflow
  • Acceptance: Teachers can publish/unpublish results; students see results only when published.

Abstract

Grading — Step 9: Participation Tracking

PR-9.1 — Achievement model (new assessable type)

  • Scope: Create Achievement as assessable for non-graded participation.
  • Model: Achievement with value_type (boolean/numeric/percentage).
  • Refs: Achievement model
  • Acceptance: Achievement model exists; can be linked to assessments; value_type validated.

PR-9.2 — Achievement marking UI

  • Scope: UI for teachers to mark achievements.
  • Controllers: Extend Assessment::ParticipationsController with achievement marking actions.
  • UI: Checkbox/numeric input for marking; student list view.
  • Refs: Participation tracking
  • Acceptance: Teachers can mark achievements; students see progress; feature flag gates UI.

Abstract

Dashboards — Step 10: Partial Integration

PR-10.1 — Student dashboard (partial)

  • Scope: Student dashboard with widgets for registrations, grades, deadlines.
  • Controllers: Dashboards::StudentController with widget partials.
  • Widgets: "My Registrations", "Recent Grades", "Upcoming Deadlines".
  • Refs: Student dashboard mockup
  • Acceptance: Students see dashboard; widgets show data from new tables; exam eligibility widget hidden (added in Step 13).

PR-10.2 — Teacher/editor dashboard (partial)

  • Scope: Teacher dashboard with widgets for campaigns, rosters, grading.
  • Controllers: Dashboards::TeacherController with widget partials.
  • Widgets: "Open Campaigns", "Roster Management", "Grading Queue".
  • Refs: Teacher dashboard mockup
  • Acceptance: Teachers see dashboard; widgets show actionable items; certification widget hidden (added in Step 13).

Abstract

Student Performance — Step 11: System Foundations

PR-11.1 — Performance schema (Record, Rule, Achievement, Certification)

  • Scope: Create student_performance_records, student_performance_rules, student_performance_achievements, student_performance_certifications.
  • Migrations:
    • 20251120000000_create_student_performance_records.rb
    • 20251120000001_create_student_performance_rules.rb
    • 20251120000002_create_student_performance_achievements.rb
    • 20251120000003_create_student_performance_certifications.rb
  • Refs: Student Performance models
  • Acceptance: Migrations run; models have correct associations; unique constraints on certifications.

PR-11.2 — Computation service (materialize Records)

  • Scope: StudentPerformance::ComputationService to aggregate performance data.
  • Implementation: Reads from assessment_participations and assessment_task_points; writes to student_performance_records.
  • Refs: ComputationService
  • Acceptance: Service computes points and achievements; upserts Records; handles missing data gracefully.
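The core aggregation is a group-and-sum over task points; a hedged sketch of the Record materialization (input rows are stand-ins for assessment_task_points):

```ruby
# Sketch of the computation step: sum task points per student into a
# Record-shaped result; students with no rows simply get no entry.
def compute_records(task_points_rows)
  task_points_rows.group_by { |row| row[:user_id] }
                  .transform_values { |rows| rows.sum { |r| r[:points] } }
end

compute_records([
  { user_id: 1, points: 4.5 },
  { user_id: 1, points: 3.0 },
  { user_id: 2, points: 5.0 }
])
# => { 1 => 7.5, 2 => 5.0 }
```

In the real service this result would be upserted into student_performance_records along with a computed_at timestamp.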

PR-11.3 — Evaluator (proposal generator)

  • Scope: StudentPerformance::Evaluator to generate certification proposals.
  • Implementation: Reads Records and Rules; returns proposed status (passed/failed) per student.
  • Refs: Evaluator
  • Acceptance: Evaluator generates proposals; does NOT create Certifications; used for bulk UI only.
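Because the Evaluator only proposes and never persists, it can be modeled as a pure function from Records and a Rule to statuses (rule shape hypothetical, reduced here to a points threshold):

```ruby
# Sketch of proposal generation: pure function, nothing is written; creating
# Certifications stays a separate, teacher-driven step.
def certification_proposals(records, min_points:)
  records.transform_values { |points| points >= min_points ? :passed : :failed }
end

certification_proposals({ 1 => 45.0, 2 => 20.0 }, min_points: 30.0)
# => { 1 => :passed, 2 => :failed }
```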

PR-11.4 — Records controller (factual data display)

  • Scope: StudentPerformance::RecordsController for viewing performance data.
  • Controllers: Index/show actions for Records.
  • UI: Table view with points, achievements, computed_at timestamp.
  • Refs: RecordsController
  • Acceptance: Teachers can view Records; no decision-making UI; feature flag gates access.

PR-11.5 — Certifications controller (teacher workflow)

  • Scope: StudentPerformance::CertificationsController for teacher certification.
  • Controllers: Index (dashboard), create (bulk), update (override), bulk_accept.
  • UI: Certification dashboard with proposals; bulk accept/reject; manual override with notes.
  • Refs: CertificationsController
  • Acceptance: Teachers can review proposals; bulk accept; override with manual status; remediation workflow for stale certifications.

PR-11.6 — Evaluator controller (proposal endpoints)

  • Scope: StudentPerformance::EvaluatorController for proposal generation.
  • Controllers: bulk_proposals, preview_rule_change, single_proposal.
  • UI: Modal for rule change preview showing diff of affected students.
  • Refs: EvaluatorController
  • Acceptance: Teachers can generate proposals; preview rule changes; does NOT create Certifications automatically.

Abstract

Exam — Step 12: Registration & Certification Integration

PR-12.1 — Exam model (cross-cutting concerns)

  • Scope: Create Exam model with concerns.
  • Concerns: Registration::Campaignable, Registration::Registerable, Roster::Rosterable, Assessment::Assessable.
  • Implementation: materialize_allocation! delegates to replace_roster!; allocated_user_ids returns roster user IDs.
  • Refs: Exam model
  • Acceptance: Exam includes all concerns; methods implemented; no functional changes to existing exams.

PR-12.2 — Lecture performance policy (add to engine)

  • Scope: Add student_performance policy kind to Registration::PolicyEngine.
  • Implementation: Registration::Policy#eval_student_performance checks StudentPerformance::Certification.find_by(...).status.
  • Phase awareness: Returns different errors for registration (missing/pending) vs finalization (failed).
  • Refs: Policy evaluation
  • Acceptance: Policy checks Certification table; phase-aware logic; tests use Certification doubles.
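The phase-aware branching could be sketched as follows; the error keys and the exact blocking rules are assumptions for illustration (the text only fixes that registration reacts to missing/pending and finalization to failed):

```ruby
# Sketch of the phase-aware student_performance check.
# Returns nil when registration/finalization may proceed, otherwise an error key.
def student_performance_error(status, phase:)
  case phase
  when :registration
    return :certification_missing_or_pending if [:missing, :pending].include?(status)
    :certification_failed if status == :failed
  when :finalization
    :certification_failed if status == :failed
  end
end

student_performance_error(:pending, phase: :registration)  # blocks (pending)
student_performance_error(:pending, phase: :finalization)  # => nil (decided later)
student_performance_error(:failed,  phase: :finalization)  # blocks (auto-reject)
```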

PR-12.3 — Pre-flight checks (campaign open/finalize)

  • Scope: Add certification completeness checks to campaign lifecycle.
  • Controllers: Update Registration::CampaignsController#open to check for missing/pending certifications; block if incomplete.
  • Update Registration::AllocationController#finalize to check for missing/pending; auto-reject failed certifications.
  • Refs: Pre-flight validation
  • Acceptance: Campaigns cannot open without complete certifications; finalization blocked if pending; failed certifications auto-rejected.

PR-12.4 — Exam FCFS registration

  • Scope: Exam registration with student_performance policy.
  • Controllers: Extend Registration::UserRegistrationsController for exam context.
  • UI: Registration button with eligibility status display.
  • Refs: Exam registration flow
  • Acceptance: Students can register for exams; policy blocks ineligible users; clear error messages; feature flag gates UI.

PR-12.5 — Grade scheme application

  • Scope: Apply grading schemes to exam results.
  • Service: GradeScheme::Applier to map points to grades.
  • Controllers: GradeScheme::SchemesController (preview/apply).
  • Refs: GradeScheme applier
  • Acceptance: Teachers can apply schemes; preview grade distribution; grades saved to participations.
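Since grade_scheme_thresholds are percentage-based (PR-6.2), the applier is essentially a descending threshold lookup; a sketch with a hypothetical grade scale and cut-offs:

```ruby
# Sketch of GradeScheme::Applier-style mapping: sort thresholds descending by
# min_percent, the first one the score reaches determines the grade.
def apply_scheme(thresholds, percent)
  thresholds.sort_by { |t| -t[:min_percent] }
            .find { |t| percent >= t[:min_percent] }
            &.fetch(:grade)
end

thresholds = [
  { min_percent: 90, grade: "1.0" },
  { min_percent: 75, grade: "2.0" },
  { min_percent: 50, grade: "3.0" },
  { min_percent: 0,  grade: "5.0" }
]
apply_scheme(thresholds, 82) # => "2.0"
apply_scheme(thresholds, 49) # => "5.0"
```

The preview action would run exactly this mapping over all participations without saving, so teachers can inspect the grade distribution before applying.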

Abstract

Dashboards — Step 13: Complete Integration

PR-13.1 — Student dashboard extension

  • Scope: Add student performance and exam registration widgets.
  • Widgets: "Exam Eligibility Status", "Performance Overview".
  • Refs: Student dashboard complete
  • Acceptance: Students see eligibility status; performance summary; links to certification details.

PR-13.2 — Teacher dashboard extension

  • Scope: Add certification and exam management widgets.
  • Widgets: "Certification Pending List", "Eligibility Summary".
  • Refs: Teacher dashboard complete
  • Acceptance: Teachers see pending certifications; summary of eligible students; links to remediation UI.

Abstract

Quality — Step 14: Hardening & Integrity

PR-14.1 — Background jobs (performance/certification)

  • Scope: Create integrity jobs for student performance.
  • Jobs: PerformanceRecordUpdateJob (recompute Records after grading), CertificationStaleCheckJob (flag stale certifications), AllocatedAssignedMatchJob (verify roster consistency).
  • Refs: Background jobs
  • Acceptance: Jobs run on schedule; log issues; no auto-fix for critical data.

PR-14.2 — Admin reporting (integrity dashboard)

  • Scope: Admin UI for monitoring data integrity.
  • Controllers: Admin::IntegrityController with dashboard views.
  • Widgets: Pending certifications, stale certifications, roster mismatches.
  • Refs: Monitoring
  • Acceptance: Admins see integrity metrics; drill-down to affected records; export reports.

Parallelization Strategy

This chapter outlines how multiple developers can work on the Implementation Plan simultaneously. It identifies parallelization opportunities, conflict hotspots, and coordination strategies for efficient team collaboration.

High Parallelization Potential

Steps 3 (FCFS mode) and 5 (Roster maintenance) allow up to 3 developers to work concurrently on independent PRs.

Overview

The Implementation Plan consists of 14 steps across 4 phases. Some steps must be sequential due to hard dependencies, while others can be highly parallelized. With a 3-developer team, strategic work distribution can significantly reduce total implementation time.

Step-by-Step Parallelization

Step 2: Foundations (Sequential)

Parallelization level: 1 developer (sequential)

Sequence:

  1. PR-2.1 (Schema) → PR-2.2 (PolicyEngine) → PR-2.3 (Concerns)

Why sequential? Each PR builds directly on the previous. The schema must exist before the PolicyEngine can reference it; concerns depend on schema models.

Optional parallel work:

  • PR-2.4 (Talk as Registerable) can proceed after PR-2.3 if seminars are in scope for MVP. Talk registration follows the same pattern as Tutorial registration and can be implemented by a second developer in parallel with Tutorial-focused PRs in Step 3.

graph LR
    PR21[PR-2.1<br/>Schema] --> PR22[PR-2.2<br/>PolicyEngine]
    PR22 --> PR23[PR-2.3<br/>Concerns]
    PR23 -.->|optional| PR24[PR-2.4<br/>Talk]

    style PR21 fill:#ff9999
    style PR22 fill:#ff9999
    style PR23 fill:#ff9999
    style PR24 fill:#ffcc99

Step 3: FCFS Mode (High Parallelization)

Parallelization level: Up to 3 developers

Phase 3a: Admin & Student Foundations

Parallel tracks (2 developers):

Track  PR      Developer Focus
A      PR-3.1  Admin scaffold (campaigns/policies CRUD)
B      PR-3.2  Student index (tabs/filters)

Prerequisites: PR-2.3 must be merged.

Why parallel? Both PRs implement different controllers (CampaignsController vs UserRegistrationsController) with no shared code paths.

Merge order: Either can merge first; no dependencies between them.

Phase 3b: Student FCFS Flows

Parallel tracks (2 developers):

Track  PR      Flow Type
A      PR-3.3  FCFS single-item campaigns
B      PR-3.4  FCFS multi-item picker

Prerequisites: PR-3.1 and PR-3.2 merged.

Why parallel? Both implement different branches of UserRegistrationsController#show logic. They share the controller file but modify different action branches based on campaign configuration.

Conflict management:

  • Each PR adds distinct routes (register_single, register_multi)
  • Shared private methods (ensure_eligible!, enforce_capacity!) can be extracted by the first PR to merge
  • Last PR to merge handles route file conflicts (rebase before merge)

Merge strategy: Flexible order; coordinate in daily standup.

Exam registration deferred

PR-3.5 (policy-gated exam registration) has been moved to Step 12. Step 3 focuses on tutorial and talk registration only.

graph TD
    PR23[PR-2.3<br/>Concerns]

    subgraph "Phase 3a: Parallel (2 devs)"
        PR31[PR-3.1<br/>Admin scaffold]
        PR32[PR-3.2<br/>Student index]
    end

    subgraph "Phase 3b: Parallel (2 devs)"
        PR33[PR-3.3<br/>FCFS single]
        PR34[PR-3.4<br/>FCFS multi]
    end

    PR23 --> PR31
    PR23 --> PR32
    PR31 --> PR33
    PR31 --> PR34
    PR32 --> PR33
    PR32 --> PR34

    style PR31 fill:#99ccff
    style PR32 fill:#99ccff
    style PR33 fill:#90ee90
    style PR34 fill:#90ee90

Step 4: Preference-Based (Mixed Parallelization)

Parallelization level: 2-3 developers depending on phase

Phase 4a: UI & Persistence Foundations

Parallel tracks (2 developers):

Track  PR      Purpose
A      PR-4.1  Student preference ranking UI
B      PR-4.2  Roster foundations (models + service)

Prerequisites: Step 3 complete.

Why parallel? PR-4.1 uses the stubbed materialize_allocation! interface from PR-2.3. PR-4.2 implements the real roster persistence. They don't conflict because PR-4.1 only reads the interface.

Optional parallel work:

  • Developer C can research solver libraries (MCMF vs CP-SAT) and draft PR-4.3 structure while waiting for PR-4.2 to merge.

Phase 4b: Solver Integration (Draft in Parallel, Merge Sequentially)

Single track (1 developer):

PR      Dependencies
PR-4.3  PR-4.2 must be merged (needs roster persistence)

Why sequential merge? The solver's finalize! method calls materialize_allocation!, which writes to roster tables created in PR-4.2. This is a hard dependency for merging.

But drafting can be parallel: Developer can write solver logic with stubbed materialize_allocation! calls while PR-4.2 is in review. Only the final merge requires PR-4.2 to land first.

Parallel work during PR-4.3:

  • Developer B: Draft views for PR-4.4 (allocation controller UI)
  • Developer C: Write integration test suite for allocation flow

Phase 4c: Allocation UI & Wiring

Parallel tracks (2 developers):

Track  PR      Dependencies
A      PR-4.4  PR-4.3 merged
B      PR-4.5  PR-4.3 merged (can draft in parallel with 4.4)

Why parallel? PR-4.4 adds teacher UI for allocation operations. PR-4.5 wires student-facing result views. Minimal overlap.

Merge order: PR-4.4 → PR-4.5 (preferred but flexible).

Dashboard widgets deferred

Dashboard widgets for registration/allocation are now part of Step 10 (Dashboard Partial), not incremental additions in Steps 3-4.

graph TD
    PR36[Step 3<br/>complete]

    subgraph "Phase 4a: Parallel (2 devs)"
        PR41[PR-4.1<br/>Student prefs UI]
        PR42[PR-4.2<br/>Roster foundations]
    end

    subgraph "Phase 4b: Sequential (bottleneck)"
        PR43[PR-4.3<br/>Solver integration]
    end

    subgraph "Phase 4c: Parallel (2 devs)"
        PR44[PR-4.4<br/>Allocation controller]
        PR45[PR-4.5<br/>Post-allocation wiring]
    end

    PR36 --> PR41
    PR36 --> PR42
    PR42 --> PR43
    PR43 --> PR44
    PR43 --> PR45

    style PR41 fill:#99ccff
    style PR42 fill:#99ccff
    style PR43 fill:#ff9999
    style PR44 fill:#90ee90
    style PR45 fill:#90ee90

Step 5: Roster Maintenance (High Parallelization)

Parallelization level: Up to 3 developers

Phase 5a: Foundation Work

Parallel tracks (2 developers):

Track  PR      Purpose
A      PR-5.1  Read-only roster controller + views
B      PR-5.4  Counters + integrity job

Prerequisites: PR-4.2 must be merged (roster infrastructure).

Why parallel? Both read from roster tables but don't modify them. PR-5.1 displays rosters, PR-5.4 counts participants. No write conflicts.

Merge order: Flexible; PR-5.1 should merge first to unblock Phase 5b.

Phase 5b: Operations & Permissions

Parallel tracks (2 developers):

Track  PR      Purpose
A      PR-5.2  Edit operations (remove/move)
B      PR-5.5  Permissions + tutor read-only variant

Prerequisites: PR-5.1 merged.

Why parallel? PR-5.2 adds controller actions for edit operations. PR-5.5 adds authorization rules (abilities) and conditional UI. Low conflict risk because they touch different layers.

Merge order: Either can merge first.

Parallel draft work:

  • Developer C: Start PR-5.3 draft (candidates panel) while waiting for PR-5.2.

Phase 5c: Candidates Panel

Single track:

PR      Dependencies
PR-5.3  PR-5.2 merged (needs edit operations to assign candidates)

graph TD
    PR42[PR-4.2<br/>Roster foundations]

    subgraph "Phase 5a: Parallel (2 devs)"
        PR51[PR-5.1<br/>Read-only controller]
        PR54[PR-5.4<br/>Counters + job]
    end

    subgraph "Phase 5b: Parallel (2 devs)"
        PR52[PR-5.2<br/>Edit operations]
        PR55[PR-5.5<br/>Permissions]
    end

    PR53[PR-5.3<br/>Candidates panel]

    PR42 --> PR51
    PR42 --> PR54
    PR51 --> PR52
    PR51 --> PR55
    PR52 --> PR53

    style PR51 fill:#99ccff
    style PR54 fill:#99ccff
    style PR52 fill:#90ee90
    style PR55 fill:#90ee90
    style PR53 fill:#90ee90

Conflict Hotspots

When multiple developers work in parallel, watch these files for merge conflicts:

1. Routes (config/routes.rb)

Why conflicts occur: Multiple PRs add new routes to the same namespace.

Mitigation strategies:

  • Designate a "routes owner": One developer handles all route-related conflicts during merge.
  • Use consistent formatting: Follow Rails conventions for namespace blocks and member/collection actions.
  • Rebase frequently: Pull latest main daily before pushing.
  • Coordinate merge order: Agree in standup which PR merges first.

Example conflict scenario:

```ruby
# PR-3.3 adds:
post :register_single

# PR-3.4 adds (same location):
post :register_multi
```

Resolution: Both lines coexist; just order them consistently.
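The merged result might look like the block below. The resource name and file layout are assumptions for illustration; only the two `post` actions come from the scenario above:

```ruby
# config/routes.rb -- hypothetical merged block after both PRs land
resources :registration_campaigns do
  member do
    post :register_single  # from PR-3.3
    post :register_multi   # from PR-3.4
  end
end
```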


2. Abilities (app/models/ability.rb)

Why conflicts occur: Multiple PRs add authorization rules to the same file or concern.

Mitigation strategies:

  • Split into concerns: Create app/abilities/registration_ability.rb and app/abilities/roster_ability.rb to separate workstreams.
  • Use section comments: Clearly mark sections like # Registration — FCFS mode
  • Group related rules: Keep all rules for one controller together.

Recommended structure:

```ruby
# app/models/ability.rb
class Ability
  include CanCan::Ability
  include RegistrationAbility
  include RosterAbility
  include AssessmentAbility
  # ...
end
```
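A minimal sketch of the concern split is below. The tiny `Ability` class only mimics CanCanCan's rule collection so the example runs standalone; all rule names, subjects, and the user-hash shape are illustrative assumptions:

```ruby
# Each workstream owns one concern file, so parallel PRs edit separate
# files instead of one shared ability.rb.
module RegistrationAbility
  # Registration rules (FCFS mode); names are illustrative.
  def registration_abilities(user)
    can :read, :registration_campaign
    can :register, :registration_campaign if user[:student]
  end
end

module RosterAbility
  def roster_abilities(user)
    can :read, :roster
    can :manage, :roster if user[:teacher]
  end
end

# Minimal stand-in for CanCan::Ability: collects (action, subject) pairs.
class Ability
  include RegistrationAbility
  include RosterAbility

  def initialize(user)
    @rules = []
    registration_abilities(user)
    roster_abilities(user)
  end

  def can(action, subject)
    @rules << [action, subject]
  end

  def can?(action, subject)
    @rules.include?([action, subject])
  end
end

teacher = Ability.new(teacher: true, student: false)
puts teacher.can?(:manage, :roster)  # => true
```

Each concern stays self-contained, so a registration PR and a roster PR never touch the same lines.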

3. Dashboard Components

Why conflicts occur: Multiple widgets land in the same dashboard views. The risk is reduced because widgets are consolidated in Step 10 (Dashboard Partial) rather than added incrementally.

Mitigation strategies:

  • Use separate component files: Each widget is its own component (OpenRegistrationsCard, AllocationResultsCard, ManageRostersCard).
  • Feature flag each widget: Enables independent testing without UI conflicts.
  • Coordinate in Step 10: Multiple developers can work on different widgets in parallel during Step 10 implementation.
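Per-widget flags can be as simple as a lookup that gates which cards render. The `FeatureFlags` store and flag names below are invented for illustration; only the card names come from the list above:

```ruby
# Hypothetical per-widget feature flags: each dashboard card renders only
# when its flag is on, so widgets can merge and be tested independently.
class FeatureFlags
  FLAGS = { open_registrations_card: true, allocation_results_card: false }

  def self.enabled?(name)
    FLAGS.fetch(name, false)
  end
end

widgets = {
  open_registrations_card: "OpenRegistrationsCard",
  allocation_results_card: "AllocationResultsCard"
}

# Only flagged-on widgets reach the rendered dashboard.
visible = widgets.select { |flag, _component| FeatureFlags.enabled?(flag) }.values
puts visible.inspect  # => ["OpenRegistrationsCard"]
```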

4. UserRegistrationsController

Why conflicts occur: PRs 3.3 and 3.4 both modify the same controller.

Mitigation strategies:

  • Keep actions separate: Each PR implements distinct actions or branches (if campaign.single_item? vs if campaign.multi_item?)
  • Extract shared methods early: The first PR to merge should extract helpers like ensure_eligible!, enforce_capacity!, build_registration_context.
  • Coordinate merge order: Agree which PR merges first; others rebase and adopt the extracted methods.

Example of method extraction:

```ruby
# First PR to merge extracts:
private

def ensure_eligible!(campaign)
  result = Registration::PolicyEngine.call(campaign, current_user)
  redirect_to(...) unless result.pass?
end
```

Later PRs reuse this method instead of duplicating logic.


Steps 6-9: Grading & Assessments

Step 6: Grading Foundations (Sequential)

Parallelization level: 1 developer

Sequence: PR-6.1 (Assessment schema) → PR-6.2 (Grade scheme schema)

Why sequential? Both are purely additive migrations; they can be combined into a single PR or landed one after the other. Low complexity.


Step 7: Assessments (Sequential)

Parallelization level: 1 developer

Sequence: PR-7.1 (Migration) → PR-7.2 (Controllers)

Why sequential? Controllers depend on migrated Assessment records existing. Migration must complete first.


Step 8: Assignment Grading (High Parallelization)

Parallelization level: Up to 3 developers

Parallel tracks:

| Track | PR | Purpose |
|-------|--------|---------|
| A | PR-8.1 | Grading service (backend) |
| B | PR-8.2 | Grading UI (teacher/TA) |
| C | PR-8.3 | Publish/unpublish results |

Prerequisites: Step 7 complete.

Why parallel? PR-8.1 is pure service logic (no UI). PR-8.2 builds UI that calls the service (can use doubles initially). PR-8.3 adds toggle actions to existing AssessmentsController.

Merge order: PR-8.1 → PR-8.2 → PR-8.3 (preferred). PR-8.2 can draft with stubbed service calls while PR-8.1 is in review.
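Drafting PR-8.2 against a double could use Minitest's stdlib mock. The service and method names here are assumptions until PR-8.1 defines the real interface:

```ruby
require "minitest/mock"

# Stand-in for the PR-8.1 grading service: the UI work in PR-8.2 can be
# built and tested against this double before the real service merges.
grading_service = Minitest::Mock.new
grading_service.expect(:grade, { points: 8.5, max: 10.0 }, [:submission_1])

result = grading_service.grade(:submission_1)
puts result[:points]     # => 8.5
grading_service.verify   # raises if the expected call never happened
```

Once PR-8.1 lands, PR-8.2 swaps the mock for the real service with no UI changes.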

graph TD
    PR71[PR-7.1/7.2<br/>Assessments]

    subgraph "Phase 8: Parallel (3 devs)"
        PR81[PR-8.1<br/>Grading service]
        PR82[PR-8.2<br/>Grading UI]
        PR83[PR-8.3<br/>Publish/unpublish]
    end

    PR71 --> PR81
    PR71 --> PR82
    PR71 --> PR83
    PR81 --> PR82
    PR82 --> PR83

    style PR81 fill:#90ee90
    style PR82 fill:#90ee90
    style PR83 fill:#90ee90

Step 9: Participation Tracking (Moderate Parallelization)

Parallelization level: 2 developers

Parallel tracks:

| Track | PR | Purpose |
|-------|--------|---------|
| A | PR-9.1 | Achievement model (new assessable type) |
| B | PR-9.2 | Achievement marking UI |

Prerequisites: Step 8 complete.

Why parallel? PR-9.1 creates model and migrations. PR-9.2 builds UI (can draft with stubbed model initially).

Merge order: PR-9.1 → PR-9.2 (PR-9.2 requires model to exist).

graph TD
    PR83[PR-8.3<br/>Publish/unpublish]

    subgraph "Phase 9: Parallel (2 devs)"
        PR91[PR-9.1<br/>Achievement model]
        PR92[PR-9.2<br/>Achievement marking UI]
    end

    PR83 --> PR91
    PR83 --> PR92
    PR91 --> PR92

    style PR91 fill:#99ccff
    style PR92 fill:#99ccff

Step 10: Dashboard (Partial) - High Parallelization

Parallelization level: Up to 2 developers

Parallel tracks:

| Track | PR | Purpose |
|-------|---------|---------|
| A | PR-10.1 | Student dashboard (partial) |
| B | PR-10.2 | Teacher/editor dashboard (partial) |

Prerequisites: Steps 2-9 complete.

Why parallel? Completely separate controllers and views. Student dashboard shows registration/grades from student perspective. Teacher dashboard shows campaigns/rosters/grading from admin perspective.

Merge order: Flexible (no dependencies).

graph TD
    PR92[PR-9.2<br/>Achievement marking]

    subgraph "Phase 10: Parallel (2 devs)"
        PR101[PR-10.1<br/>Student dashboard]
        PR102[PR-10.2<br/>Teacher dashboard]
    end

    PR92 --> PR101
    PR92 --> PR102

    style PR101 fill:#90ee90
    style PR102 fill:#90ee90

Steps 11-13: Student Performance & Exam Registration

Step 11: Student Performance System (Very High Parallelization)

Parallelization level: Up to 4 developers

Phase 11a: Schema & Services (3 developers):

| Track | PR | Purpose |
|-------|---------|---------|
| A | PR-11.1 | Performance schema (4 tables) |
| B | PR-11.2 | Computation service (draft in parallel) |
| C | PR-11.3 | Evaluator (draft in parallel) |

Prerequisites: Step 9 complete (needs assessment data).

Why parallel? PR-11.2 and PR-11.3 can be drafted against local schema definitions while PR-11.1 is in review; both merge after PR-11.1 lands.

Phase 11b: Controllers (3 developers):

| Track | PR | Purpose |
|-------|---------|---------|
| A | PR-11.4 | Records controller (factual data display) |
| B | PR-11.5 | Certifications controller (teacher workflow) |
| C | PR-11.6 | Evaluator controller (proposal endpoints) |

Prerequisites: PR-11.1, PR-11.2, PR-11.3 merged.

Why parallel? Three independent controllers with distinct purposes. Minimal shared code.

Merge order: Flexible (PR-11.4 can merge first as it's simplest).

graph TD
    PR102[PR-10.2<br/>Teacher dashboard]

    subgraph "Phase 11a: Schema & Services (3 devs)"
        PR111[PR-11.1<br/>Performance schema]
        PR112[PR-11.2<br/>Computation service]
        PR113[PR-11.3<br/>Evaluator]
    end

    subgraph "Phase 11b: Controllers (3 devs)"
        PR114[PR-11.4<br/>Records controller]
        PR115[PR-11.5<br/>Certifications controller]
        PR116[PR-11.6<br/>Evaluator controller]
    end

    PR102 --> PR111
    PR102 --> PR112
    PR102 --> PR113
    PR111 --> PR112
    PR111 --> PR113
    PR112 --> PR114
    PR112 --> PR115
    PR112 --> PR116
    PR113 --> PR114
    PR113 --> PR115
    PR113 --> PR116

    style PR111 fill:#90ee90
    style PR112 fill:#90ee90
    style PR113 fill:#90ee90
    style PR114 fill:#90ee90
    style PR115 fill:#90ee90
    style PR116 fill:#90ee90

Step 12: Exam Registration (Moderate Parallelization)

Parallelization level: Up to 3 developers

Parallel tracks:

| Track | PR | Purpose |
|-------|---------|---------|
| A | PR-12.1 | Exam model (cross-cutting concerns) |
| B | PR-12.2 | Lecture performance policy (add to engine) |
| C | PR-12.3 | Pre-flight checks (draft in parallel) |

Prerequisites: Step 11 complete.

Why parallel? PR-12.1 creates Exam model. PR-12.2 adds policy kind to existing PolicyEngine. PR-12.3 can draft pre-flight logic (merges after PR-12.1 and PR-12.2).
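One way PR-12.2 can slot a new kind into the engine without touching existing ones is a registry of checks. This is a sketch; the real PolicyEngine interface and policy names may differ:

```ruby
# Illustrative policy registry: each policy kind registers a check, so a
# new kind lands without editing the code paths of existing kinds.
module PolicyEngine
  REGISTRY = {}

  def self.register(kind, &check)
    REGISTRY[kind] = check
  end

  def self.pass?(kind, context)
    REGISTRY.fetch(kind).call(context)
  end
end

# Stand-in for a kind already in the engine:
PolicyEngine.register(:window_open) { |ctx| ctx[:open] }

# New kind added by PR-12.2:
PolicyEngine.register(:lecture_performance) do |ctx|
  ctx[:points] >= ctx[:threshold]
end

puts PolicyEngine.pass?(:lecture_performance, points: 42, threshold: 40)  # => true
```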

Sequential continuation:

| PR | Dependencies |
|---------|--------------|
| PR-12.4 | PR-12.1, PR-12.2, PR-12.3 merged |
| PR-12.5 | PR-12.4 merged |
graph TD
    PR116[PR-11.6<br/>Evaluator controller]

    subgraph "Phase 12a: Parallel (3 devs)"
        PR121[PR-12.1<br/>Exam model]
        PR122[PR-12.2<br/>LP policy]
        PR123[PR-12.3<br/>Pre-flight checks]
    end

    subgraph "Phase 12b: Sequential"
        PR124[PR-12.4<br/>Exam FCFS registration]
        PR125[PR-12.5<br/>Grade scheme application]
    end

    PR116 --> PR121
    PR116 --> PR122
    PR116 --> PR123
    PR121 --> PR123
    PR122 --> PR123
    PR123 --> PR124
    PR124 --> PR125

    style PR121 fill:#99ccff
    style PR122 fill:#99ccff
    style PR123 fill:#99ccff
    style PR124 fill:#ff9999
    style PR125 fill:#ff9999

Step 13: Dashboard Extension (Low Parallelization)

Parallelization level: 2 developers

Parallel tracks:

| Track | PR | Purpose |
|-------|---------|---------|
| A | PR-13.1 | Student dashboard extension |
| B | PR-13.2 | Teacher dashboard extension |

Prerequisites: Steps 11-12 complete.

Why parallel? Extends existing dashboards from Step 10 with new widgets. Student and teacher dashboards are independent.

Merge order: Flexible.

graph TD
    PR125[PR-12.5<br/>Grade scheme]

    subgraph "Phase 13: Parallel (2 devs)"
        PR131[PR-13.1<br/>Student dashboard ext]
        PR132[PR-13.2<br/>Teacher dashboard ext]
    end

    PR125 --> PR131
    PR125 --> PR132

    style PR131 fill:#90ee90
    style PR132 fill:#90ee90

Step 14: Quality & Hardening (Moderate Parallelization)

Parallelization level: 2 developers

Parallel tracks:

| Track | PR | Purpose |
|-------|---------|---------|
| A | PR-14.1 | Background jobs (performance/certification) |
| B | PR-14.2 | Admin reporting (integrity dashboard) |

Prerequisites: Steps 11-13 complete.

Why parallel? PR-14.1 creates background jobs. PR-14.2 builds admin UI that displays job results (can use stubbed data initially).

Merge order: PR-14.1 → PR-14.2 (PR-14.2 displays job results).
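The dependency between the two tracks can be sketched like this. All class and field names are hypothetical; in the real app PR-14.1 would be an ActiveJob and PR-14.2 a controller reading its persisted results:

```ruby
# PR-14.1 sketch: a job that recomputes performance data and persists a
# run summary that the admin reporting UI (PR-14.2) later displays.
class PerformanceRecomputeJob
  RESULTS = []  # stands in for a database table of job runs

  def self.perform(lecture)
    summary = { lecture: lecture, recomputed: 17, failures: 0 }
    RESULTS << summary
    summary
  end
end

# PR-14.2 sketch: a read-only view over the persisted job results.
PerformanceRecomputeJob.perform("Analysis I")
latest = PerformanceRecomputeJob::RESULTS.last
puts "#{latest[:lecture]}: #{latest[:recomputed]} recomputed, #{latest[:failures]} failures"
```

Because PR-14.2 only reads the persisted summaries, it can be developed against stubbed rows until the job is real.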

graph TD
    PR132[PR-13.2<br/>Teacher dashboard ext]

    subgraph "Phase 14: Parallel (2 devs)"
        PR141[PR-14.1<br/>Background jobs]
        PR142[PR-14.2<br/>Admin reporting]
    end

    PR132 --> PR141
    PR132 --> PR142
    PR141 --> PR142

    style PR141 fill:#99ccff
    style PR142 fill:#99ccff

Parallelization Summary

Key insights:

  • High parallelization: Steps 3, 5, 8, 10, 11 (2-4 developers)
  • Moderate parallelization: Steps 9, 12, 13, 14 (2 developers)
  • Sequential bottlenecks: Steps 2, 4 (PR-4.3), 6, 7
  • Overall: With 3-4 developers, Steps 3-5 can complete in ~60% of sequential time. Steps 6-14 add similar parallelization gains.

Even the bottlenecked steps (4, 6-7, and 12-14) allow parallel work before and after the blocking PR.

graph LR
    subgraph "Legend"
        SEQ[Sequential - Must be done in order]
        PAR2[Parallel - 2 developers can work together]
        PAR3[Parallel - 3 developers can work together]
        BOTTLE[Bottleneck - Blocks other work]
    end

    style SEQ fill:#ff9999
    style PAR2 fill:#99ccff
    style PAR3 fill:#90ee90
    style BOTTLE fill:#ff6666
flowchart TD
    Start([Start Implementation])

    Step2{{"Step 2: Foundations<br/>(Sequential - 1 dev)"}}
    Step3{{"Step 3: FCFS Mode<br/>(Parallel - up to 2 devs)"}}
    Step4{{"Step 4: Preference-Based<br/>(Mixed - 2 devs)"}}
    Step4b{{"PR-4.3: Solver<br/>(Bottleneck)"}}
    Step5{{"Step 5: Roster Maintenance<br/>(Parallel - up to 2 devs)"}}
    Step6{{"Step 6-7: Grading Foundations<br/>(Sequential - 1 dev)"}}
    Step8{{"Step 8: Assignment Grading<br/>(Parallel - up to 3 devs)"}}
    Step9{{"Step 9: Participation<br/>(Parallel - 2 devs)"}}
    Step10{{"Step 10: Dashboard Partial<br/>(Parallel - 2 devs)"}}
    Step11{{"Step 11: Student Performance<br/>(Parallel - up to 4 devs)"}}
    Step12{{"Step 12: Exam Registration<br/>(Mixed - 2-3 devs)"}}
    Step13{{"Step 13: Dashboard Extension<br/>(Parallel - 2 devs)"}}
    Step14{{"Step 14: Quality & Hardening<br/>(Parallel - 2 devs)"}}
    Done([Implementation Complete])

    Start --> Step2
    Step2 --> Step3
    Step3 -->|High parallelization| Step4
    Step4 -->|Phase 4a-4b| Step4b
    Step4b -->|Phase 4c| Step5
    Step5 -->|High parallelization| Step6
    Step6 --> Step8
    Step8 -->|High parallelization| Step9
    Step9 --> Step10
    Step10 -->|High parallelization| Step11
    Step11 -->|Very high parallelization| Step12
    Step12 --> Step13
    Step13 --> Step14
    Step14 --> Done

    style Step2 fill:#ff9999
    style Step3 fill:#90ee90
    style Step4 fill:#99ccff
    style Step4b fill:#ff6666
    style Step5 fill:#90ee90
    style Step6 fill:#ff9999
    style Step8 fill:#90ee90
    style Step9 fill:#99ccff
    style Step10 fill:#90ee90
    style Step11 fill:#90ee90
    style Step12 fill:#99ccff
    style Step13 fill:#90ee90
    style Step14 fill:#99ccff