For each step you create, you can toggle its visibility by clicking the Hide icon. By default, all steps are visible. When you hide a step, its name remains visible, but its details do not. Additionally, when you use Continuous Feedback, hidden steps are not run before the deadline; AutoTest automatically runs all hidden steps within 30 minutes after the deadline. There are several use cases for hiding steps:
You may sometimes have performance-heavy tests. When giving your students Continuous Feedback, you may want to hide these tests to speed up the process, since hidden steps are not run until after the deadline.
Sometimes you might want some tests that students can see and others that they cannot. For example, you might provide a simple test suite that students can use to quickly test their code, alongside a deeper, advanced test suite that you use to actually grade it. With hidden tests, you can hide the advanced suite.
We recommend two different ways to compile students' code. Which one to use depends on the application.
If you want to use the compiled code in multiple categories, we recommend using the per-student setup script for compiling. Either upload a compilation script as a fixture, or enter the compilation command directly in the input field.
If you want to stop AutoTest when the compilation fails, you can do this in the following way:
Create a compilation rubric category.
Create a new AutoTest level and add the compilation category in this level.
Use a Run Program step to check whether compilation was successful (e.g. by checking if the compiled files exist).
Save this category and create a new AutoTest level to hold all your other test categories.
Set the "Only execute further levels" option to 100%.
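The compilation check in the Run Program step above can be sketched as a small script you upload as a fixture. This is a sketch under assumptions: check_compiled.sh and main are hypothetical names, so substitute your own binary name.

```shell
# Write a compilation-check script you could upload as a fixture and
# run in the Run Program step. "main" is an assumed binary name.
cat > check_compiled.sh <<'EOF'
#!/usr/bin/env bash
# Exit non-zero when the compiled binary is missing, so the step fails.
if [ -x main ]; then
    echo "compilation succeeded"
else
    echo "compilation failed" >&2
    exit 1
fi
EOF
chmod +x check_compiled.sh
```

Because the step fails exactly when the script exits non-zero, the 100% requirement on the level stops all further levels for submissions that did not compile.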
If you only want to use the compiled code in one category (e.g. when every category has a different program), we recommend using a Run Program step combined with a Checkpoint to compile the code.
Create a Run Program step with the compilation command.
Create a Checkpoint step right below the Run Program step and set it to 100%.
In this way, the category will stop testing if the Run Program step fails.
The final grade of an AutoTest run is not determined by the weights you set in AutoTest, but by the number of points of the rubric item that AutoTest reaches in each category.
To start setting the weights, first select the rubric calculation mode: either minimum, where a rubric item is selected as soon as its lower bound is reached, or maximum, where a rubric item is selected only when its upper bound is reached.
You want to use maximum when students need to pass all tests in an AutoTest category before they get the highest item in the rubric category.
Let's go over an example to make this more clear. This is the rubric category we want to create tests for:
- Nothing works (0 points)
- Compiling works (1 point)
- Simple tests work (5 points)
- Advanced tests work (10 points)
Here the maximum mode is selected, as students should only reach the last rubric item (Advanced tests work) when 100% of the tests pass.
The AutoTest category for this rubric contains two steps: a "Stop if compilation fails" step and an IO Test with 4 substeps.
The compile step actually has the highest weight, yet earns the student the fewest points. This is because a weight of 8 is needed to reach 50% in the rubric category, which in turn selects the Compiling works item.
The simple tests and the advanced tests each have a weight of 4, which is 25% of the total achievable points each, ensuring the correct rubric item is filled in.
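The mapping from weights to percentages in this example comes down to simple arithmetic:

```shell
# Weights from the example: compile = 8, simple = 4, advanced = 4.
compile=8; simple=4; advanced=4
total=$((compile + simple + advanced))          # 16 points in total
echo "compile:  $((100 * compile / total))%"    # 50%: Compiling works
echo "simple:   $((100 * simple / total))%"     # 25%
echo "advanced: $((100 * advanced / total))%"   # 25%
```

Passing compilation alone gives 50%, compilation plus the simple tests gives 75%, and only passing everything reaches the 100% needed for the Advanced tests work item in maximum mode.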
You can always override the grade by changing it in the grade input field. If you rerun AutoTest, the overridden grade is preserved. If you only want to adjust the grade downwards, you can also use a rubric category with negative weights (one item in the category with 0 points, and all other items with less than 0 points).
Installing packages and third-party software can be done easily using the global setup script. Either upload a bash script with installation commands as a fixture, or enter the commands directly in the input field. You can install Ubuntu packages with
sudo apt-get install -y PACKAGE_NAME.
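A minimal sketch of such a global setup script, written here to a file as you would before uploading it as a fixture. The script name and the valgrind package are examples, not requirements.

```shell
# Sketch of a global setup script; the file name and package are
# examples. In CodeGrade you would upload this file as a fixture or
# paste its commands into the input field.
cat > global_setup.sh <<'EOF'
#!/usr/bin/env bash
set -e  # abort the setup if any installation step fails
sudo apt-get update
sudo apt-get install -y valgrind
EOF
chmod +x global_setup.sh
```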
You can assess style and structure by using a linter. Use the "Code Quality" AutoTest step and choose a linter to run on the code submitted by students. This test calculates its score based on the number of comments the linter generates. It is even possible to configure penalties based on the severity of each comment.
You can use a unit testing framework by using one of the wrapper scripts that we provide or by writing your own. The wrapper scripts write their results to a file that is read by CodeGrade to get any output, error messages, and the final score.
Using an existing grading script in CodeGrade is straightforward: slightly modify the script so that it outputs a value between zero and one at the end, upload it as a fixture, and use a Capture Points test to execute the grading script and capture the score.
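A minimal sketch of such a modification, assuming your script already knows how many of its checks passed (the counts below are placeholders):

```shell
#!/usr/bin/env bash
# Sketch of adapting an existing grading script for a Capture Points
# step: after running your checks, print a score between 0 and 1 as
# the final line of output. The pass counts here are placeholders.
passed=3
total=4
awk -v p="$passed" -v t="$total" 'BEGIN { printf "%.2f\n", p / t }'
```

With these placeholder counts the script prints 0.75 as its last line, which the Capture Points step reads as the score.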
This is easily achieved by splitting your rubrics into multiple categories, one category for the automated tests and one category for the manual tests. Then, AutoTest will fill in the automatic category and you can fill in the manual category yourself. This also has the advantage of a clear separation to your students, making it easier for them to see which part is assessed automatically and which part is assessed manually.
Firstly, you can hide your fixtures in the User Interface. By default, fixtures are hidden when you upload them. You can change the state by clicking the Hide icon.
However, students' code will still be able to access these fixtures on the AutoTest servers. You can limit this by using a special script; you can read more about this here.
Sometimes students might output numbers in a different format, or use a different type of rounding. CodeGrade supplies a normalize_floats program in AutoTest to solve this issue. You can use it in the following way: normalize_floats amount_of_decimals program_to_run.
IO tests fail by default if the exit code of the program is not 0. Sometimes, however, you want IO tests to also pass with an exit code other than 0. You can fix this by appending || true to your command; this ensures the exit code is always 0.
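For example, grep exits with status 1 when it finds no match, which would normally fail an IO test:

```shell
# grep exits 1 when nothing matches; "|| true" forces the overall
# exit status of the command to 0.
echo "hello" | grep "missing" || true
echo "exit status: $?"   # prints: exit status: 0
```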
It may be desirable to inspect files that are generated during the run of an AutoTest, such as compiled objects or IPython notebooks. By default, generated files are not saved, but they will be when you write them to the $AT_OUTPUT directory. The files are then accessible through the "AutoTest output" section of the file browser in the Code Viewer.
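A minimal sketch, assuming your run produces a report.txt you want to keep. $AT_OUTPUT is set by AutoTest during a run; the local fallback directory is an assumption added only so the snippet also runs outside AutoTest.

```shell
# $AT_OUTPUT is set by AutoTest during a run; fall back to a local
# directory so this snippet also works outside AutoTest.
out="${AT_OUTPUT:-./at_output}"
mkdir -p "$out"

# Stand-in for a file your test run actually generates.
echo "all tests passed" > report.txt

# Copy it into the output directory so it is saved after the run.
cp report.txt "$out/"
```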