Add Spec Tests as Context
You:
Hey, let's enhance this by adding specification or unit test files along with code changes to files that have corresponding tests. I think that would be important. Also, is there a way to
ChatGPT:
"You're spot on—linking code changes to their corresponding tests is crucial for understanding both coverage and risk. Let’s enhance this workflow to include test files alongside code changes, ensuring a complete picture of impact."
Enhancement Overview
1. Link Code Files to Their Test Files
- Automatically identify test files related to the changed code files in the PR.
- Include both the code diff and its associated test file diff for AI analysis.
- Ensure the AI can evaluate whether test coverage adequately supports the changes.
2. Analyze Both Together
- Include a special section in the prompt for the code file and its corresponding test file.
- Evaluate:
- Are the tests updated or added for the changes made in the code?
- Are the tests sufficient based on the complexity of the changes?
3. Handle Unlinked Files
- If no corresponding test file is found, flag it in the AI analysis.
Implementation Steps
Step 1: Link Code Files to Test Files
Add logic to match code files with their test files based on naming conventions. For example:
- Code file: `src/app/module/file.ts`
- Test file: `src/app/module/file.spec.ts`

Use a script to:
- Identify all test files in the repository (`*.spec.ts`, `*.test.js`, etc.).
- Match test files to changed code files based on:
  - Path similarity.
  - Naming conventions (e.g., `file.ts` → `file.spec.ts`).
- Include unmatched test files for analysis.
Step 2: Updated Workflow
Here’s the updated GitHub workflow to include tests.
```yaml
name: AI-Powered PR Analysis with Tests

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  analyze-pr:
    runs-on: ubuntu-latest
    steps:
      # Step 1: Check out the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Step 2: List changed files
      - name: List changed files
        id: changed_files
        run: |
          git fetch origin ${{ github.event.pull_request.base.ref }}
          git diff --name-only origin/${{ github.event.pull_request.base.ref }} > changed_files.txt

      # Step 3: Link code files to test files
      # (first write all_test_files.txt, which link-tests.js reads;
      # the script writes linked_files.json itself)
      - name: Link test files
        id: linked_tests
        run: |
          git ls-files '*.spec.ts' '*.test.js' > all_test_files.txt
          node ./scripts/link-tests.js

      # Step 4: Analyze each file (loop through code and test files)
      - name: Analyze files with AI
        uses: ./actions/ai-file-analysis # Custom action for AI analysis
        with:
          linked_files: linked_files.json
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```
Step 3: Script to Link Code and Test Files
Add a script (link-tests.js) to pair code files with test files.
scripts/link-tests.js
```javascript
const fs = require('fs');
const path = require('path');

// Helper to find a test file matching a code file
function findTestFile(codeFile, testFiles) {
  const codeFileName = path.basename(codeFile, path.extname(codeFile));
  return testFiles.find(
    (testFile) =>
      testFile.includes(`${codeFileName}.spec`) || testFile.includes(`${codeFileName}.test`)
  );
}

function linkTests(changedFiles) {
  const allTestFiles = fs
    .readFileSync('all_test_files.txt', 'utf-8')
    .split('\n')
    .filter((file) => file);

  return changedFiles.map((codeFile) => {
    const testFile = findTestFile(codeFile, allTestFiles);
    return { codeFile, testFile: testFile || null };
  });
}

// Read changed files
const changedFiles = fs
  .readFileSync('changed_files.txt', 'utf-8')
  .split('\n')
  .filter((file) => file);

// Save linked files as JSON
fs.writeFileSync('linked_files.json', JSON.stringify(linkTests(changedFiles), null, 2));
```
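The script above pairs code files with tests, but the overview also calls for surfacing unmatched test files. One hedged way to extend it (a sketch; `findOrphanTests` is a hypothetical addition, not in the script above) is to flag changed test files whose corresponding code file was not part of the change set:

```javascript
// Hypothetical extension to link-tests.js: flag changed test files whose
// corresponding code file is NOT in the change set, so they are not
// silently dropped from the analysis.
function findOrphanTests(changedFiles) {
  const isTest = (file) => /\.(spec|test)\.[jt]sx?$/.test(file);
  // 'src/app/module/file.spec.ts' -> 'src/app/module/file.ts'
  const codeFor = (testFile) => testFile.replace(/\.(spec|test)(\.[jt]sx?)$/, '$2');
  return changedFiles
    .filter(isTest)
    .filter((testFile) => !changedFiles.includes(codeFor(testFile)));
}

console.log(findOrphanTests(['a.spec.ts', 'b.ts', 'b.spec.ts']));
// → ['a.spec.ts']
```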
Step 4: Updated AI Analysis
Update the AI prompt to handle both code and test files.
Custom GitHub Action (index.js)
Modify the analyzeFile function to include the test file (if available) in the prompt.
```javascript
const { execSync } = require('child_process');
const axios = require('axios');

async function analyzeFile(codeFile, testFile, baseBranch, apiKey) {
  try {
    const codeDiff = execSync(`git diff origin/${baseBranch} -- ${codeFile}`).toString();
    const testDiff = testFile
      ? execSync(`git diff origin/${baseBranch} -- ${testFile}`).toString()
      : 'No corresponding test file found.';

    const response = await axios.post(
      'https://api.openai.com/v1/chat/completions',
      {
        model: 'gpt-4',
        messages: [
          {
            role: 'system',
            content:
              'You are an AI that analyzes code and test file diffs to evaluate pull requests. Assess the impact of the changes and whether the test coverage is adequate.',
          },
          {
            role: 'user',
            content: `Analyze this code file diff:\n\n${codeDiff}\n\nAnalyze this test file diff (if available):\n\n${testDiff}`,
          },
        ],
        functions: [
          {
            name: 'analyze_pr_files',
            description: 'Analyze a code file and its test file for impact and coverage.',
            parameters: {
              type: 'object',
              properties: {
                code_file: { type: 'string', description: 'Code file being analyzed.' },
                test_file: { type: 'string', description: 'Test file being analyzed.' },
                impact_score: { type: 'number', description: 'Impact score for the changes.' },
                coverage_evaluation: { type: 'string', description: 'Assessment of test coverage.' },
              },
              required: ['code_file', 'impact_score', 'coverage_evaluation'],
            },
          },
        ],
        function_call: { name: 'analyze_pr_files' },
      },
      {
        headers: {
          Authorization: `Bearer ${apiKey}`,
          'Content-Type': 'application/json',
        },
      }
    );

    // function_call.arguments is a JSON string; parse it before returning
    return JSON.parse(response.data.choices[0].message.function_call.arguments);
  } catch (error) {
    console.error(`Error analyzing file ${codeFile}:`, error);
    return null;
  }
}
```
Expected AI Response
AI output for linked code and test files would look like this:
```json
{
  "code_file": "src/app/module/file.ts",
  "test_file": "src/app/module/file.spec.ts",
  "impact_score": 4.2,
  "coverage_evaluation": "Tests adequately cover the new functionality, but additional edge cases could be added."
}
```
Why This Is Important
- **Tests Reflect Code Quality:**
  - Linking test files ensures changes are validated and edge cases are considered.
  - Missing or insufficient tests are flagged automatically.
- **Better AI Context:**
  - Including test diffs gives the AI more context to assess the adequacy of coverage.
  - You get actionable insights, like "add more tests for edge cases."
- **Full PR Coverage:**
  - This enhancement ensures that every aspect of the PR is considered: code changes and corresponding tests.
Next Steps for Enhanced Workflow
- Implement the script to link test files and code changes.
- Test the workflow with real PRs, ensuring linked files are processed correctly.
- Refine the AI prompt to provide specific recommendations for improving test coverage.
Let me know if you’d like to dive deeper into any part of this process!