It took me a while to figure this out. One more thing for the list of things that Jenkins makes unnecessarily complex.
The Problem
We have a pipeline job that starts a handful of other jobs (call them subjobs), and we want to aggregate the subjobs' test results in our pipeline.
The extra plugins you need
To get this done I had to add the copyartifact plugin. You will also need one of the JUnit/NUnit/xUnit plugins, depending on what your reports look like.
How to do it
Use archiveArtifacts in the subjob to archive the test results. Something like:
archiveArtifacts artifacts: 'results\\*.xml', fingerprint: true
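In context, the tail end of a subjob's pipeline might look like the sketch below. The stage name and results path are placeholders; the junit call assumes you also want the subjob's own build page to show its results, which the post doesn't require:

```groovy
// Hypothetical tail of the subjob's Jenkinsfile.
stage('Publish results') {
    // Record the results on this subjob's own build page (JUnit plugin).
    junit 'results\\*.xml'
    // Archive the raw XML so CopyArtifact can fetch it from the parent job.
    archiveArtifacts artifacts: 'results\\*.xml', fingerprint: true
}
```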
Then, in the job that does the aggregation, make sure to keep references to the subjob builds. From each reference we get the build number of the test job, which we need in order to find the correct artifacts to copy.
def test_jobs = []   // a list, since we append build numbers to it below
def subjobs = [:]
for (int i = 0; i < test_suites.size(); i++) {
    def instanceNumber = i
    subjobs["${env.BUILD_DISPLAY_NAME}-${instanceNumber}"] = {
        def tj = build job: 'Test'
        test_jobs << tj.getNumber()
    }
}
parallel subjobs
Another tricky thing has to do with how Groovy scopes closures: if i is not copied to a local variable (instanceNumber above) before the closure is created, every closure captures the same i and you end up with only one job in the subjobs map.
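You can see the same effect in plain Groovy outside Jenkins. The loop variable is captured by reference, so copying it to a local first is what keeps each closure distinct (the names here are just for illustration):

```groovy
def closures = [:]
for (int i = 0; i < 3; i++) {
    def n = i  // copy the loop variable; drop this line and use i in the
               // closure body, and every closure sees the final value of i
    closures["job-${n}"] = { "running ${n}" }
}
// each closure holds its own n, so the printed numbers differ
closures.each { name, c -> println "${name}: ${c()}" }
```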
Then, once the subjobs are finished, we pick a node so we have a workspace and grab the test reports.
node() {
    for (int i = 0; i < test_jobs.size(); i++) {
        def selector = [$class: 'SpecificBuildSelector',
                        buildNumber: "${test_jobs[i]}"]
        step([$class: 'CopyArtifact',
              projectName: 'Test',
              selector: selector])
    }
    //Your test results plugin here
}
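With the JUnit plugin, that last placeholder comment could become a single junit call over the copied reports. The glob is an assumption about where CopyArtifact placed the files; adjust it to your layout:

```groovy
// Collect every copied report into this build's test result page.
// allowEmptyResults is a hypothetical safeguard for a subjob that
// produced no XML at all.
junit testResults: '**/results/*.xml', allowEmptyResults: true
```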
I found that selector trick in the Blue Ocean sources (don't even ask me how I ended up there).