Advanced C Autograding

Discover the advanced autograding options for C assignments

In this guide, we explore the advanced grading options available for C assignments. For more information about setting up a C assignment from scratch, see:

Create your first C assignment

Check Unit Tests

Check is an industry-standard unit testing framework for C. It is particularly useful for grading assignments that require students to write functions with well-defined interfaces. Unit testing with Check offers several advantages over conventional IO testing, including the ability to use assertions, parameterize test cases (a loop-test sketch follows the example tests below), and provide better feedback to students. Consider the example submission below:

bubble_sort.h
#ifndef BUBBLE_SORT_H
#define BUBBLE_SORT_H

void bubble_sort(int arr[], int n);

#endif // BUBBLE_SORT_H
bubble_sort.c
#include "bubble_sort.h"

void bubble_sort(int arr[], int n) {
    int i, j, temp;
    for (i = 0; i < n-1; i++) {
        for (j = 0; j < n-i-1; j++) {
            if (arr[j] > arr[j+1]) {
                // Swap arr[j] and arr[j+1]
                temp = arr[j];
                arr[j] = arr[j+1];
                arr[j+1] = temp;
            }
        }
    }
}

In this example, students have been tasked with implementing the bubble_sort() function declared in the header above. We can create robust test cases to check this submission using assertions from the Check unit testing framework.

unit_test.c
#include <check.h>
#include "bubble_sort.h"

START_TEST(test_bubble_sort_basic) {
    int arr[] = {5, 3, 8, 4, 2};
    int expected[] = {2, 3, 4, 5, 8};
    int n = 5;

    bubble_sort(arr, n);

    for (int i = 0; i < n; i++) {
        ck_assert_int_eq(arr[i], expected[i]);
    }
}
END_TEST

START_TEST(test_bubble_sort_sorted) {
    int arr[] = {1, 2, 3, 4, 5};
    int expected[] = {1, 2, 3, 4, 5};
    int n = 5;

    bubble_sort(arr, n);

    for (int i = 0; i < n; i++) {
        ck_assert_int_eq(arr[i], expected[i]);
    }
}
END_TEST

START_TEST(test_bubble_sort_reverse) {
    int arr[] = {9, 7, 5, 3, 1};
    int expected[] = {1, 3, 5, 7, 9};
    int n = 5;

    bubble_sort(arr, n);

    for (int i = 0; i < n; i++) {
        ck_assert_int_eq(arr[i], expected[i]);
    }
}
END_TEST

START_TEST(test_bubble_sort_duplicates) {
    int arr[] = {4, 1, 3, 4, 2};
    int expected[] = {1, 2, 3, 4, 4};
    int n = 5;

    bubble_sort(arr, n);

    for (int i = 0; i < n; i++) {
        ck_assert_int_eq(arr[i], expected[i]);
    }
}
END_TEST

Suite* bubble_sort_suite(void) {
    Suite *s;
    TCase *tc_core;

    s = suite_create("BubbleSort");

    /* Core test case */
    tc_core = tcase_create("Core");

    tcase_add_test(tc_core, test_bubble_sort_basic);
    tcase_add_test(tc_core, test_bubble_sort_sorted);
    tcase_add_test(tc_core, test_bubble_sort_reverse);
    tcase_add_test(tc_core, test_bubble_sort_duplicates);
    suite_add_tcase(s, tc_core);

    return s;
}

int main(void) {
    int number_failed;
    Suite *s;
    SRunner *sr;

    s = bubble_sort_suite();
    sr = srunner_create(s);

    /* Write results to report.xml in Check's XML format;
       logging must be configured before srunner_run_all(). */
    srunner_set_xml(sr, "report.xml");
    srunner_run_all(sr, CK_NORMAL);
    number_failed = srunner_ntests_failed(sr);
    srunner_free(sr);
    return (number_failed == 0) ? 0 : 1;
}
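
The fixed-input tests above each exercise a single case. Check can also parameterize one test body as a loop test, as mentioned earlier: tcase_add_loop_test() runs the same test once for every index in a half-open range, exposing the current index inside the test as _i. A minimal sketch (the array and test name here are illustrative, not part of the assignment):

    /* Runs once for each index in [0, 3), with _i set to the current index. */
    static const int single_inputs[] = {7, -1, 0};

    START_TEST(test_bubble_sort_single) {
        int arr[] = { single_inputs[_i] };
        bubble_sort(arr, 1);
        /* A one-element array must come back unchanged. */
        ck_assert_int_eq(arr[0], single_inputs[_i]);
    }
    END_TEST

    /* Registered in bubble_sort_suite() with:
       tcase_add_loop_test(tc_core, test_bubble_sort_single, 0, 3); */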

"cg junitxml" command

We could simply compile and run the test cases and show students a pass or fail depending on the exit code of the tests. However, we cannot award partial marks for each test case unless we parse the results of the unit tests. To accomplish that, we can use the cg junitxml command, which parses the number of passed test cases, and the feedback from failed ones, out of any unit test report written in the JUnit XML format.

Running the command cg junitxml --help shows the following information:

This is a parser for generic unit test coverage reports in JUnit XML format.

The parser can take multiple JUnit XML files as input. The final reported score
is an aggregate of all the reports. Skipped test cases do not count towards the
score.

Example use:
    Input to the command (in a file called coverage.xml):
<?xml version="1.0" encoding="UTF-8" ?>
<testsuites id="Calculator suite" name="empty_name" tests="2" failures="1" time="0.001">
  <testsuite id="uuid1" name="Addition and Multiplication" tests="2" failures="1" time="0.001">
    <testcase id="uuid2" name="[1] Check whether the addition function returns the expected result." time="0.001"></testcase>
    <testcase id="uuid2" name="[2] Check whether the multiply function returns the expected result." time="0.001">
      <failure message="expected: 6.0 but was: 5.0" type="ERROR">
ERROR: Expected: 6.0 but was: 5.0
Category: Checking returns - Multiplication
File: /home/codegrade/student/calculator.py
Line: 2
      </failure>
    </testcase>
  </testsuite>
</testsuites>

    You would then call:
cg junitxml coverage.xml

Usage:
  cg junitxml XMLS... [flags]

Flags:
  -h, --help              help for junitxml
      --no-parse-weight   Don't try to parse the weight from titles.
      --no-score          Don't calculate and output the final score based on the results of the input data.
  -o, --output string     The output file to use. If this is a number it will use that file descriptor. (default "3")
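
Note the --no-parse-weight flag: by default, cg junitxml tries to read a weight from each test case title, which is presumably what the bracketed [1] and [2] prefixes in the sample report above encode; weighting lets some test cases count more heavily towards the final score than others.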

The cg junitxml command works well in combination with the Custom Test block, which displays the parsed JUnit XML report in a clear format that is easy to read and interpret.

Unfortunately, the Check framework has no built-in way to output JUnit XML. Instead, we can use the srunner_set_xml() function to generate an XML report (in Check's own format) and use the following script to convert that report into the JUnit XML format.
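
For reference, Check's XML report looks roughly like the abridged sketch below (the exact elements, namespace, and values vary between Check versions and runs; this sample is illustrative):

    <?xml version="1.0"?>
    <testsuites xmlns="http://check.sourceforge.net/ns">
      <suite>
        <title>BubbleSort</title>
        <test result="success">
          <id>test_bubble_sort_basic</id>
          <description>Core</description>
          <message>Passed</message>
          <duration>0.000100</duration>
        </test>
      </suite>
      <duration>0.001000</duration>
    </testsuites>

The script strips the XML namespace from the tag names, then maps each <suite> and <test> element onto JUnit's <testsuite> and <testcase> elements, tallying failures and errors along the way.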

junit_xml.py
#!/usr/bin/env python3
# pylint: disable=missing-module-docstring,missing-function-docstring

import os
import sys
import typing as t
import subprocess
import xml.etree.ElementTree as ET


def _find_and_get_text(el: ET.Element, to_find: str) -> str:
    found = el.find(to_find)
    assert found is not None, (
        'Expected to find %s in %s, but could not find it' % (to_find, el.tag)
    )
    return found.text or ''


def parse_case(el: ET.Element) -> ET.Element:
    case = ET.Element('testcase')
    case.set('classname', _find_and_get_text(el, 'description'))
    case.set('name', _find_and_get_text(el, 'id'))
    # Sometimes the 'duration' attr is -1.
    case.set('time', str(max(0, float(_find_and_get_text(el, 'duration')))))

    state = el.get('result')
    if state is not None and state != 'success':
        message = ET.Element(state)
        message.text = _find_and_get_text(el, 'message')
        case.append(message)

    return case


def parse_suite(el: ET.Element) -> ET.Element:
    cases = [parse_case(c) for c in el.findall('test')]

    suite = ET.Element('testsuite')
    suite.set('name', _find_and_get_text(el, 'title'))
    suite.set('tests', str(len(cases)))
    suite.set(
        'failures',
        str(sum(1 for c in cases if c.find('failure') is not None))
    )
    suite.set(
        'errors', str(sum(1 for c in cases if c.find('error') is not None))
    )
    suite.set('time', str(sum(float(c.get('time') or '0') for c in cases)))
    suite.extend(cases)

    return suite


def parse_tree(el: ET.Element) -> ET.ElementTree:
    assert el.tag == 'testsuites'

    suites = [parse_suite(s) for s in el.findall('suite')]

    tree = ET.Element('testsuites')
    tree.set('time', _find_and_get_text(el, 'duration'))
    tree.set('tests', str(sum(int(s.get('tests') or '0') for s in suites)))
    tree.set(
        'failures', str(sum(int(s.get('failures') or '0') for s in suites))
    )
    tree.set('errors', str(sum(int(s.get('errors') or '0') for s in suites)))
    tree.extend(suites)

    return ET.ElementTree(tree)


def run(
    test: t.List[str]
) -> t.NoReturn:
    """Run a Check suite and parse its results
    """
    status = subprocess.run(test)

    # Remove namespaces in tag names.
    tree = ET.iterparse("report.xml")
    for _, el in tree:
        _, _, el.tag = el.tag.rpartition('}')

    root = tree.root  # type: ignore
    parse_tree(root).write("report.xml")
    sys.exit(status.returncode)

if __name__ == '__main__':
    run(sys.argv[1:])
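
When invoked as python3 junit_xml.py ./unit_test, the script runs the compiled test binary (which writes report.xml via srunner_set_xml()), rewrites report.xml in place in the JUnit XML format, and exits with the binary's exit status so that a failing suite remains detectable in later steps.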

Instructions

  1. In the AutoTest settings, navigate to the Setup tab.

  2. Add an Install GCC block to your setup configuration.

  3. Add a Script block to your setup configuration and install Check with the following command.

    sudo apt install check
  4. In the AutoTest settings, navigate to the Tests tab.

  5. Add an Upload Files block. Upload bubble_sort.h, unit_test.c, and junit_xml.py.

  6. Add a Script block. Move all the uploaded files to the student's directory and compile the unit tests together with the student's bubble_sort.c using the following commands. The -lcheck flag links against the Check library; on Ubuntu, Check typically also requires linking -lm, -lpthread, -lrt, and -lsubunit.

    mv $UPLOADED_FILES/* .
    gcc -o unit_test unit_test.c bubble_sort.c -lcheck -lm -lpthread -lrt -lsubunit
  7. Add a Connect Rubric block and a Custom Test block. Nest the Custom Test block within the Connect Rubric block. Run the unit tests and parse the results with the following commands.

    python3 junit_xml.py ./unit_test
    cg junitxml report.xml
  8. Build and publish your snapshot.

Clang-tidy

Clang-tidy is an industry-standard static analysis tool for C, C++, and Objective-C that allows you to diagnose and fix code style violations, interface misuse, and bugs. It is a useful tool for enforcing code style best practices for beginner programmers.

"cg comments" command

Simply running clang-tidy produces its default command-line output. However, this is not particularly useful for students: they would have to interpret the raw output themselves and switch back and forth between the AutoTest output and their code. Instead, we can use the cg comments command to parse the output of clang-tidy and write the comments directly onto our students' code. The cg comments command highlights each target line according to the severity of the comment, which can be read by hovering over the line number. The comments are also placed on students' code in the editor, making this a powerful combination.
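
clang-tidy prints each diagnostic in the form file:line:column: severity: message [check-name], for example (an illustrative line; the exact check and location depend on the submission):

    /home/codegrade/student/bubble_sort.c:4:5: warning: multiple declarations in a single statement reduces readability [readability-isolate-declaration]

The regular expression passed to cg comments in step 4 below captures these fields into the named groups file, line, column, severity, and message, while the --ignore-regex pattern skips blank lines and any line that does not begin with / (that is, anything that is not a full-path diagnostic, such as continuation lines or summary output).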

Instructions

  1. In the AutoTest settings, navigate to the Setup tab.

  2. Add a Script block. Install clang-tidy using the following commands.

    sudo apt update
    sudo apt install clang-tidy
  3. In the AutoTest settings, navigate to the Tests tab.

  4. Add a Connect Rubric block and a Custom Test block. Nest the Custom Test block in the Connect Rubric block. Run clang-tidy and parse the output using the following command.

    clang-tidy bubble_sort.c --quiet --checks='*' -- | cg comments \
        '^(?P<file>[^:]*):(?P<line>[\d]+):(?P<column>[\d]+): (?P<severity>\s*\S*): (?P<message>.*)$' \
         --origin clang-tidy \
         --ignore-regex '^\s*$|^[^/].*$'
  5. Build and publish your snapshot.
