This guide provides a comprehensive overview of all the blocks available in CodeGrade's AutoTest Version 2, which streamline the process of automatically grading assignments. The blocks are designed to handle a wide range of tasks, from executing test cases and validating code structure to managing outputs, controlling environments, and integrating grading rubrics.
Each section outlines the function and configuration of different blocks, offering practical insights, best practices, and real-world examples to help you maximize the effectiveness of AutoTest in your grading workflows. Whether you are setting up a simple test or a complex assignment, this guide will walk you through the various options and how to leverage them to create a smooth and efficient grading process.
1. Test Blocks
These blocks handle the actual execution of tests and grading based on program output, code structure or other types of tests.
IO Test
The IO (Input/Output) Test block is useful for validating program output against predefined expected results. It runs a program and checks whether the output matches the expected output according to the match blocks nested inside it.
There are three types of match blocks available for customizing the behavior:
Full Match: Verifies that the program's output exactly matches the expected output for a given input.
Substring Match: Checks whether the program's output contains the expected output as a substring for a given input.
Regex Match: Uses a Python regular expression to check if the output matches a specific pattern for a given input.
All match blocks can be configured for case sensitivity and can either ignore or include whitespace during matching.
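For example, for a hypothetical adder program that reads two numbers and prints "The sum is 7", the three match types could be configured as follows:

Input: 3 4
Full Match expected output: The sum is 7
Substring Match expected output: 7
Regex Match pattern: ^The sum is \d+$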
Code Structure Test
The Code Structure Test block uses Semgrep, a static analysis tool, to enforce coding patterns or syntax in student code. Run a Code Structure Test on a file to verify specific coding patterns in student submissions.
Steps for Setting Up a Code Structure Test with Semgrep
Specify the Student File: In the Student file field, specify the file to check (default is every file in the student directory).
Create Test Cases: Nest Positive Match and/or Negative Match blocks inside this block to define your test cases:
Positive Match: Passes if a specified pattern is found in the student's code.
Negative Match: Fails if a specified pattern is found in the student's code.
Craft the Rule in Semgrep Playground: Visit the Semgrep Playground to create the rule that defines the pattern to check for.
Update the Rule Template: Copy the output from the Advanced tab into the match block’s template and edit only the pattern and languages fields as shown below:
rules:
  - id: untitled_rule
    pattern: YOUR_PATTERN
    message: Semgrep found a match
    languages: [THE_PROGRAMMING_LANGUAGE]
    severity: WARNING
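For example, a rule for a Positive Match that checks whether the student uses a for loop in Python might look like this (a sketch; only the pattern and languages fields differ from the template above):

rules:
  - id: untitled_rule
    pattern: |
      for $X in $ITER:
        ...
    message: Semgrep found a match
    languages: [python]
    severity: WARNING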
Simple Python Test
The Simple Python Test block allows you to integrate student code into a larger Python script, providing enhanced grading and feedback capabilities beyond basic I/O testing.
This block is especially useful for assignments where students are asked to submit small code snippets rather than full programs.
How It Works
You can insert the student's code into the script using the # CG_INSERT filename.py directive. This directive allows you to seamlessly integrate the student's code into a broader script where you can set up additional tests and validation steps.
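Here is a minimal sketch of such a wrapper script; the file name solution.py and the variable total are hypothetical, chosen for illustration:

# CG_INSERT solution.py

# After CodeGrade inserts the student's snippet above, any names the
# student defines are available to the rest of this script.
assert total == 42, f"Expected total to be 42, but got {total}"
print("All checks passed!")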
To further improve feedback, the cg_feedback_helpers module can be used. This module allows you to run the student's code in the context of a complete program and use assertions for more precise grading and feedback.
To use cg_feedback_helpers, simply install it in the AutoTest setup with the following command:
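pip3 install cg-feedback-helpers

(The package name here is an assumption: it is taken to mirror the module name cg_feedback_helpers.)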
Pytest
Pytest is the industry-standard unit testing framework for Python. The Pytest block enables you to run Pytest unit tests automatically on student submissions. It is particularly useful for grading assignments where students develop their own functions, as it allows you to assess function correctness and edge cases.
The Pytest block makes it easy to create multiple test cases to evaluate the functionality of different functions. This block is ideal for advanced assignments, especially those requiring full programs or handling edge cases.
You can further enhance feedback with the cg_pytest_reporter module, which provides decorators like @name, @description, and @weight to add clarity and adjust the grading for each test case. For more information, see the cg_pytest_reporter documentation.
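As an illustration, a small test suite for the Pytest block might look like the following sketch. The student file solution.py, the function add, and the exact import path for the decorators are assumptions:

from cg_pytest_reporter import name, weight  # import path assumed

from solution import add  # hypothetical student-defined function


@name("Adds two positive numbers")
@weight(2)
def test_add_positive():
    assert add(2, 3) == 5


@name("Handles negative inputs")
@weight(1)
def test_add_negative():
    assert add(-2, -3) == -5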
Test results are clearly presented for easy review by students.
Java Compile
The Java Compile block is essential for compiling Java programs before any tests are executed. It ensures that the student's code is syntactically correct by handling compilation, displaying error traces, and providing detailed feedback if any errors occur.
Before running tests, every Java program must be compiled to detect syntax errors and other compilation issues. This block parses compiler errors and highlights the specific lines in the code where the issues occur, allowing for targeted feedback. By ensuring the program compiles successfully, it enables subsequent tests to focus on verifying functionality without the interference of compilation errors.
JUnit5
The Junit5 block allows you to define and run unit tests on compiled Java files in the student directory, providing a robust framework for evaluating assignments involving functions and classes.
Ensure that the test class name and the test file name match.
This block can automatically install JUnit5, compile and execute the tests, or retrieve the necessary JUnit5 JAR files for custom configurations. The test results are then presented in a clear, user-friendly format, making it easy for both instructors and students to review and understand the outcomes.
Script
The Script block allows the execution of a bash script as part of the test process. The block passes if the script completes successfully, indicated by an exit status code of 0. If the script encounters an error or fails to execute as expected, resulting in a non-zero exit status code, the block fails.
The Script block is versatile and supports various tasks (see the sketch after this list), such as:
File I/O validation
Environment setup and teardown
Multi-step integration tests
Checking external dependencies and process statuses
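For example, here is a minimal sketch of a Script block that validates file I/O; all file names are hypothetical:

#!/bin/bash
set -e  # stop at the first failing command so the block fails fast

python3 main.py < input.txt    # run the student's program on a fixed input
test -f output.csv             # the program must create this file
grep -q "total,42" output.csv  # and it must contain the expected record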
Custom Test
The Custom Test block in CodeGrade’s AutoTest system allows for flexible and custom grading through scripting. You can write your own grading logic in any language (e.g., Python, Bash), making it ideal for complex assignments that don’t fit within predefined test blocks.
Key Features and Benefits of the Custom Test Block
Flexibility in Grading:
Handle complex or multi-step evaluations, such as generating files, validating content, and performing detailed checks. Ideal for assignments requiring partial credit or advanced logic beyond predefined test blocks.
Custom Grading Logic:
Create tailored grading rules to assign partial points or grade based on custom criteria like output formatting or function efficiency. Results are returned in a simple JSON format (e.g., { "tag": "points", "points": "3/5" }).
Integration with External Tools:
Run external libraries, tools, or custom configurations (e.g., linters, data analysis, or machine learning models) directly within your grading script, giving you full control over the testing environment.
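As a rough sketch, a Custom Test script in Python might look like this. The file names are hypothetical, and it is assumed here that the JSON result is printed to standard output:

import json
import subprocess

# Run the student's program (hypothetical entry point) with a fixed input.
result = subprocess.run(
    ["python3", "main.py"],
    input="3 4\n",
    capture_output=True,
    text=True,
    timeout=10,
)

points = 0
if result.returncode == 0:
    points += 2  # the program ran without crashing
if "7" in result.stdout:
    points += 3  # the expected sum appears in the output

# Report partial credit in the JSON format described above.
print(json.dumps({"tag": "points", "points": f"{points}/5"}))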
Explore an example of using the Custom Test block for C++ unit testing and code linting at Advanced C++ autograding.
Flake8
The block generates comments directly on the student's code, highlighting the target lines according to the severity of the issue. Comments can be viewed by hovering over the line numbers, providing clear and actionable feedback.
2. Execution Control Blocks
These blocks help configure the testing environment or manage specific conditions for when tests should run.
Run Only In
The Run Only In block ensures that the nested blocks are executed only within a specified environment.
Within CodeGrade, two distinct environments are available based on the submission settings:
Submission-Only Environment:
This environment is activated when students submit files through uploads or Git repository connections.
Editor Environment:
When the editor is enabled, this environment allows students to interact directly with their code.
Within the editor environment, students can run tests multiple times using the Run button in the editor, with no limit on its usage.
Nest your test blocks within a Run Only In (Submission) block to require students to use a submission attempt to receive feedback. The Run Only In block can be applied to any test execution block.
Run If
The Run If block ensures that nested blocks are executed only when a specified condition is met.
This block allows you to control the execution of test cases based on student performance. You can select a rubric category and define a threshold percentage, which dictates whether the tests inside the block will run.
If the student's score satisfies the condition (at or above the threshold, or below it, depending on your configuration), the tests will execute.
If the condition is not met, the tests will be skipped.
This gives you flexible control over when and how tests are executed based on performance.
Allow Internet
The Allow Internet block grants internet access to nested blocks.
By default, internet access is enabled during the AutoTest Setup phase and disabled during the Tests phase.
If a test execution block requires internet access (e.g., to install packages), wrap it within the Allow Internet block to enable connectivity.
Timeout Each
The Timeout Each block applies a timeout to all nested blocks, ensuring they complete within a specified duration.
The timeout must be a non-negative duration. By default, the timeout for a step is set to 120 seconds. If you clear the input field, the step will revert to the default timeout, and any changes to the default timeout will automatically apply.
This block is useful for preventing long runtimes and protecting virtual machine resources. By setting an appropriate timeout for each block, you can avoid issues like infinite loops or inefficient operations, such as poorly optimized recursive functions or loops.
When combining I/O Test and Match steps, the timeout used for the match is the one set for the I/O Test step.
3. Feedback and Result Management Blocks
These blocks manage how the feedback, results, and configurations are presented to the student.
Connect Rubric
The Connect Rubric block links the outcomes of nested test blocks to a specific rubric category, automating the grading process. Use this block to make your test blocks count toward the grade.
Test blocks within the Connect Rubric block ensure that the outcome—whether pass, fail, or partial completion—is directly tied to the chosen rubric category. This integration allows the corresponding grade to be automatically recorded and passed back to the submission, streamlining the grading workflow.
Results of test blocks executed outside a Connect Rubric block will not be linked to the submission's grade.
Weight
The Weight block allows you to assign different weights to nested blocks, controlling their contribution to the overall score. This provides flexibility in adjusting the impact of each test on the final grade.
By default, the weight of a test is set to 1. Weights can be negative, zero, or positive, and the weights of nested blocks are multiplied together: for example, a Match step with weight 2 inside a Weight block set to 3 contributes with an effective weight of 6.
Example Explanation:
In this example, the IO Test block is connected to the rubric category Result. The total score for the Result category comes from two nested Match steps.
Full Match Case #1 contributes 1 part to the total score.
Full Match Case #2 contributes 2 parts to the total score.
The weighted sum of these contributions determines the final score for the Result category: a submission that passes only Case #2 earns 2/3 of the category's points. This shows how different test cases can be given varying levels of importance within the rubric.
Hide
The Hide block allows you to conceal specific elements from students. You can choose to always hide these elements or set them to be hidden based on the deadline or lock date.
Configurable Elements to Hide:
Config: the settings of the block
Output: the output of the block
Result: whether the block passed or failed
Behavior Across Different Test Blocks:
The impact of the Hide block varies depending on the type of test block. Below are some examples that demonstrate how different types of test blocks are affected by hiding:
Hide Config around an IO Test: hides the bash command, input, and expected output.
Hide Output around an IO Test: hides the actual output of the program.
Hide Config around a Code Structure Test: hides the YAML configuration of the Semgrep rule.
Hide Output around a Code Structure Test: hides the message about whether the pattern was found.
Hide Config around Pytest/JUnit5: hides the content of the test suite file.
Hide Output around Pytest/JUnit5: hides the output message, and individual test cases are no longer shown.
Hide Config around MCQs/Select All: hides the correct answer in AutoTest.
Hide Output around MCQs/Select All: hides whether the selected option is correct.
Hide Config around Script/Custom Test: hides the bash commands in the block.
Hide Output around Script/Custom Test: hides the output from running the block.
The Hide Result block prevents students from seeing the outcome of tests, including whether they passed or failed and the total points achieved. When this block is enabled, students will not see any indication (such as a green tick, red cross, or score) regardless of the test results.
This functionality ensures that sensitive or unnecessary information is hidden from students, allowing for a more focused and secure testing environment.
4. Quiz and Interactive Blocks
Blocks for creating interactive quizzes and prompts for the student to engage with in the CodeGrade editor.
Quiz
The Quiz block transforms your CodeGrade assignment into an interactive quiz, allowing students to answer questions directly in the Code Editor. This block supports various question types, making it ideal for testing both theoretical knowledge and practical coding skills.
Question Types in the Quiz Block:
Multiple Choice Question
Select All Question
Coding Question
Prompt Engineering Question (only available in our opt-in AI Beta)
Each question type can be configured to run individually, so students can check their answers without waiting for previous questions to be graded.
Multiple Choice Question
Create a Multiple Choice Question where students must select one correct answer from a set of options. You can easily add or remove answers using the buttons on the right, and mark the correct answer using the checkboxes on the left of each option.
For each answer, you can provide a hint by clicking the 💡 icon. This hint appears when the student selects that option. You can offer simple feedback such as "Correct, well done!" or provide a more detailed explanation to help the student understand why their answer is correct or incorrect. The order in which the answers are displayed can be configured to be either random (different for each student) or the same order in which you added them.
Select All Question
Create a Select All Question, where students must select all correct answers from a list of options. You can add or remove answer choices, and mark the correct ones with checkboxes.
A None of the above option is automatically added as the last choice. Selecting this option will deselect any other selected answers.
For each answer, you can provide two types of feedback by clicking the 💡 icon:
Hint: Displays when the student selects an incorrect answer.
Feedback when Correct: Displays when the student selects the correct answer.
Coding Question
The Coding Question block allows you to assess students' programming skills by asking them to write code in response to a specific problem. This block supports a wide range of test scenarios, from simple code snippets to complex assignments requiring multiple steps.
In this block, students can write their answers directly in the Online Editor, and their code can be automatically tested.
When Run Single Questions is enabled and the student presses the "Check Your Answer" button for a Coding Question, only the current question and its associated tests will be executed. Tests that rely on setup outside the Coding Question block may not function as expected.
Key Features:
Customizable Question Setup: Define the question's name, student file name to create, description, and optionally provide a partially written code template for students to complete. You can also use markdown to format the question text and include external links to resources (e.g., Wikipedia).
Automated Testing: Once the student submits their code, use various test execution blocks to automatically evaluate the correctness and functionality of their solution.
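For instance, a partially written code template for a hypothetical question might leave only the function body for the student to complete:

def fizzbuzz(n):
    """Return "Fizz", "Buzz", "FizzBuzz", or the number itself as a string."""
    # TODO: write your solution here
    ...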
Prompt Engineering Question
The Prompt Engineering Question block asks students to solve problems by creating effective prompts for AI models or systems. In this question, students write a prompt, and the system executes it to generate results. These results are evaluated based on their accuracy and alignment with the expected outcome.
Key Features:
Student Input: Students craft prompts and submit them. The output generated by the AI is saved in a separate file as specified in the block’s configuration.
Grading Based on Output: The prompt can be assessed based on the AI's response, ensuring that the student's prompt leads to the expected results.
Additional System Prompts: You can provide an additional system prompt to guide the AI’s response, enhancing the control over the output generated.
This question type is useful for assignments that involve AI interactions, such as language models or code generation tools, allowing students to demonstrate their understanding of prompt crafting for AI systems.
5. Environment Management Blocks
Blocks that control the environment and file isolation during setup and test execution.
Programming Language Setup Blocks
In the AutoTest Setup phase, CodeGrade offers predefined blocks to install and configure the required programming languages and compilers for your tests. These blocks ensure that the correct environments are set up before running any tests on each student’s submission.
Java: Installs and configures a Java environment for Java-based assignments. Multiple versions available (e.g., Java 8, Java 11, Java 17, Java 21).
GCC (GNU Compiler Collection): Installs the GCC compiler for compiling C and C++ programs. Multiple versions available (e.g., GCC 9, GCC 10, GCC 13).
Python: Installs the required version of Python for Python-based assignments. Multiple versions available (e.g., Python 3.10, Python 3.11, Python 3.12).
Clang: Installs the Clang compiler for compiling C and C++ programs, often used as an alternative to GCC. Multiple versions available (e.g., Clang 12, Clang 16).
Node.js: Installs a specified version of Node.js for JavaScript-based assignments. Multiple versions available (e.g., Node 14, Node 16, Node 18).
.NET: Installs the required version of the .NET framework for running C# or other .NET-based assignments. Multiple versions available (e.g., .NET 8.0, .NET 9.0).
Isolate
The Isolate block ensures that the blocks nested within it run in a controlled, isolated environment. Once the block completes, the original state of the file system and environment is restored, preventing any changes from affecting subsequent tests.
Example Use Cases:
Temporary Logs or Configuration Files:
If a student's program generates temporary logs or modifies configuration files, wrapping these operations in an Isolate block ensures that any changes to the file system are contained. After the block finishes, the environment is restored, eliminating any unintended side effects on later tests.
Modifying Data Files:
When a student's code modifies a data file (e.g., CSV or JSON), the Isolate block ensures that any changes are rolled back after the test. This guarantees that the file remains unchanged for other tests, preserving the integrity of the data.
6. Resource and File Management Blocks
Blocks related to uploading, managing, and handling files during tests.
Upload Files
The Upload Files block allows you to upload files to be used within your test blocks. Files are placed in the $UPLOADED_FILES directory.
You can use this block to upload scripts, data files, or test fixtures needed for your automated tests. It also allows you to create, edit, rename, or delete files directly within the block. To download a file, simply click its name and use the download button in the top-right corner of the file editor.
The $UPLOADED_FILES directory is separate from the working directory in the Test phase of AutoTest. Ensure that you move or copy uploaded files to the current working directory, or reference them using their full path to avoid issues during testing.
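For example, a Script block at the start of the Tests phase could copy an uploaded test suite (hypothetical file name) into the working directory:

cp "$UPLOADED_FILES/test_suite.py" .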
Output
The Output block enables you to upload files or directories to the student's AutoTest results.
Use Case:
When student submissions generate output files or directories (e.g., logs, reports), you can capture and display these files alongside the student's code in the submission overview.
How It Works:
Capture Output: Use a Script block to run a bash command that executes the student's program and generates output files.
Upload Files: The Output block uploads the captured files to the Submission Overview. Ensure the file names match the ones referenced in the code, or use a glob pattern (e.g., *.log) to upload specific types of output files.
This block helps present relevant output files, such as logs or data files, in the submission review, making it easier to evaluate the student’s work.
Additional Notes

JUnit5 is a widely used unit testing framework for Java that offers several advantages over basic I/O tests. It supports assertions, parameterized test cases, and advanced features such as custom annotations (e.g., @DisplayName) that improve test readability. Additionally, JUnit5 enables weighted test cases, allowing for more granular grading.

Flake8 is a widely used linter that enforces coding best practices, which is particularly helpful for beginners. The Flake8 block runs the Flake8 linter on Python submissions to automatically generate inline comments based on the issues it reports; the block requires only the file name. You can adjust the partial grade deduction for each severity level (error, warning, info), allowing for more granular control over grading, and you can customize the configuration to include or exclude specific style rules.

Checkstyle is an industry-standard Java linter that enforces coding standards from various style guides. You can choose from three built-in style guides: Checkstyle's default template, the Sun style guide, and the Google style guide. You can customize these guides to include or exclude specific rules, and you can adjust the percentage of points deducted based on the severity of each comment (error, warning, info).

Students can fill in quizzes only through CodeGrade's Online Editor. For an assignment that includes quizzes, the Online Editor must therefore be the only allowed way of creating submissions. In this case, it is also convenient to enable the setting that navigates students to the Editor automatically when they launch the assignment.

The AI tools are currently in beta. Please contact CodeGrade for more information on joining the beta and getting access.

If the available language setup blocks do not meet your needs, a Script block offers the flexibility to run custom bash commands to install and configure any environment you require for your tests.

By understanding and utilizing the diverse blocks in AutoTest V2, you can automate and customize the grading process for a wide variety of assignments, with greater efficiency, flexibility, and accuracy in evaluating student submissions. From setting up programming environments to managing test execution and feedback, these blocks give you complete control over how assignments are graded. With the insights in this guide, you'll be well equipped to configure, optimize, and scale your automatic grading setup to meet your specific needs.