Workflow Overview
You:
Hey, let's start with the Chapter Five implementation. Can we see what the GitHub Action workflow looks like here?
ChatGPT:
"Alright, let’s dive into Chapter Five—scrappy AI for developers, straight from the hustle playbook. This GitHub Action workflow will be lean, mean, and AI-powered. Let’s make this happen."
Here’s the plan for implementing Chapter Five: Scrappy AI for Developers. We’ll build a GitHub Action workflow that uses AI to analyze pull requests (PRs) and assign an impact score based on:
- Code size and complexity.
- Test coverage changes.
- Files affected (core vs. auxiliary files).
- Potential risks, like introducing dependencies or modifying critical files.
GitHub Action Workflow: AI-Powered PR Analysis
Workflow Design Goals
- Automate PR analysis with AI.
- Output an impact score as part of the PR checks.
- Keep it lightweight and entirely GitHub-hosted (no backend needed).
Example Workflow File (ai-pr-analysis.yml)
name: AI-Powered PR Analysis

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  analyze-pr:
    runs-on: ubuntu-latest

    steps:
      # Step 1: Check out the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Step 2: Fetch PR details and generate diff
      - name: Generate PR diff
        id: diff
        run: |
          git fetch origin ${{ github.event.pull_request.base.ref }}
          git diff origin/${{ github.event.pull_request.base.ref }} > pr_diff.txt
          echo "diff-path=pr_diff.txt" >> $GITHUB_ENV

      # Step 3: Run AI model to analyze the PR
      - name: Analyze PR with AI
        id: ai-analysis
        run: |
          # Install Node.js dependencies (e.g., OpenAI API or Hugging Face)
          npm install axios

          # Call the AI API (example: OpenAI or custom model)
          node <<EOF
          const fs = require('fs');
          const axios = require('axios');

          const diff = fs.readFileSync('pr_diff.txt', 'utf8');

          async function analyzePR() {
            const response = await axios.post('https://api.openai.com/v1/completions', {
              model: 'text-davinci-003',
              prompt: \`Analyze this PR diff and assign an impact score based on size, complexity, and risk:\n\${diff}\`,
              max_tokens: 100,
            }, {
              headers: {
                'Authorization': 'Bearer YOUR_OPENAI_API_KEY',
                'Content-Type': 'application/json',
              },
            });

            console.log(response.data.choices[0].text);
            fs.writeFileSync('impact_score.txt', response.data.choices[0].text);
          }

          analyzePR().catch(console.error);
          EOF

      # Step 4: Output results as a comment on the PR
      - name: Post Impact Score Comment
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const impactScore = fs.readFileSync('impact_score.txt', 'utf8');

            await github.rest.issues.createComment({
              issue_number: context.payload.pull_request.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `## AI Impact Analysis\n\nImpact Score: ${impactScore}\n\n_This analysis was generated automatically using AI._`,
            });
How It Works
- Trigger: The workflow runs on PR creation or updates (pull_request event).
- Step 1: Checkout Repository: It pulls the repository and fetches the base branch for a clean diff.
- Step 2: Generate Diff: The workflow generates a diff file (pr_diff.txt) capturing the changes in the PR.
- Step 3: AI Analysis: The AI model (e.g., OpenAI GPT) is invoked using Node.js to analyze the diff and return an impact score.
- Step 4: Post Results: The AI-generated analysis is posted as a comment on the PR for developers to review.
Benefits of This Workflow
- Lightweight: No need for a database or backend—just GitHub Actions and external APIs.
- Automated Insights: Developers get immediate feedback on the potential risks and impact of their changes.
- Customizable: You can tweak the AI model’s prompt to analyze specific aspects, like security or dependency risks.
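For example, a more targeted prompt could steer the model toward the criteria listed at the start of this plan. The sketch below is purely illustrative (the buildPrompt helper and the rubric wording are assumptions, not part of the workflow above); it could stand in for the inline prompt string used in the analysis step.

// Illustrative only: one way to phrase a more specific analysis prompt.
// The exact rubric, scale, and weighting are up to your team.
export function buildPrompt(diff: string): string {
  return [
    'You are reviewing a pull request diff. Assign an impact score from 1 (trivial) to 10 (high risk).',
    'Weigh the following factors:',
    '- Code size and complexity of the change',
    '- Changes to test coverage',
    '- Whether core files or only auxiliary files are affected',
    '- New dependencies or modifications to security-sensitive code',
    'Respond with a line "Score: <number>" followed by a short justification.',
    '',
    'Diff:',
    diff,
  ].join('\n');
}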
Next Steps
- Replace YOUR_OPENAI_API_KEY: Add your OpenAI API key or replace it with a different AI service.
- Extend the Prompt: Adjust the AI prompt to fit your team’s needs (e.g., focus on test coverage or performance risks).
- Integrate with Nx Executors: If you’re using Nx, we can create an executor that calls this GitHub workflow to streamline the process further.
Want to dive deeper into tweaking this workflow or explore connecting it with custom Nx executors? Let me know!
You:
I think the GitHub workflow should define the job and workflow steps, and the implementation details should live in a custom GitHub Action. The custom GitHub Action should include an Nx executor that makes the call to the OpenAI API. Can we do this?
ChatGPT:
"Alright, let’s level this up—your GitHub workflow should orchestrate the steps, while a custom GitHub Action and Nx executor handle the heavy lifting. This is how you turn scrappy into sophisticated without the enterprise fluff."
Here’s how we’ll structure it:
- GitHub Workflow (Action YAML): Defines the high-level flow and invokes the custom GitHub Action.
- Custom GitHub Action: Handles the execution logic, including invoking the Nx executor and making the OpenAI API call.
- Nx Executor: Encapsulates the API interaction and processing logic so it’s reusable and extendable.
Step 1: GitHub Workflow File
This workflow delegates the core logic to your custom GitHub Action.
name: AI-Powered PR Analysis

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  analyze-pr:
    runs-on: ubuntu-latest

    steps:
      # Step 1: Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Step 2: Invoke the custom action for AI analysis
      - name: Run AI Analysis
        uses: ./actions/ai-pr-analysis   # Path to your custom GitHub Action
        with:
          pr_number: ${{ github.event.pull_request.number }}
          repo_owner: ${{ github.repository_owner }}
          repo_name: ${{ github.event.repository.name }}
          base_branch: ${{ github.event.pull_request.base.ref }}
          openai_api_key: ${{ secrets.OPENAI_API_KEY }}   # Add this in your GitHub secrets
Step 2: Custom GitHub Action
The custom GitHub Action contains the logic to:
- Fetch the PR diff.
- Call the Nx executor.
- Post the AI results back to the PR.
action.yml
name: AI-Powered PR Analysis
description: Analyze pull requests using AI for impact scoring.

inputs:
  pr_number:
    description: The PR number to analyze.
    required: true
  repo_owner:
    description: The owner of the repository.
    required: true
  repo_name:
    description: The name of the repository.
    required: true
  base_branch:
    description: The base branch of the PR.
    required: true
  openai_api_key:
    description: The OpenAI API key for analysis.
    required: true

runs:
  using: "node16"
  main: "index.js"
index.js
const { exec } = require('child_process');
const core = require('@actions/core');
const github = require('@actions/github');

async function run() {
  try {
    const prNumber = core.getInput('pr_number');
    const repoOwner = core.getInput('repo_owner');
    const repoName = core.getInput('repo_name');
    const baseBranch = core.getInput('base_branch');
    const openAiApiKey = core.getInput('openai_api_key');

    // Run Nx executor for analysis
    exec(
      `nx ai-pr-analyze --prNumber=${prNumber} --repoOwner=${repoOwner} --repoName=${repoName} --baseBranch=${baseBranch} --openAiApiKey=${openAiApiKey}`,
      (error, stdout, stderr) => {
        if (error) {
          core.setFailed(`Nx executor failed: ${error.message}`);
          return;
        }

        // Log and set output
        console.log(stdout);
        core.setOutput('result', stdout);
      }
    );
  } catch (error) {
    core.setFailed(error.message);
  }
}

run();
Step 3: Nx Executor
The Nx executor encapsulates the AI logic, making it reusable across projects.
executors/ai-pr-analyze/executor.ts
import { ExecutorContext } from '@nrwl/devkit';
import axios from 'axios';
import fs from 'fs';
import * as util from 'util';
import { exec } from 'child_process';
import * as github from '@actions/github'; // provides the Octokit client used below

const execPromise = util.promisify(exec);

export interface AiPrAnalyzeOptions {
  prNumber: string;
  repoOwner: string;
  repoName: string;
  baseBranch: string;
  openAiApiKey: string;
}

export default async function runExecutor(
  options: AiPrAnalyzeOptions,
  context: ExecutorContext
) {
  const diffFilePath = './tmp/pr_diff.txt';

  try {
    // Generate the PR diff
    await execPromise(
      `git fetch origin ${options.baseBranch} && git diff origin/${options.baseBranch} > ${diffFilePath}`
    );
    const diff = fs.readFileSync(diffFilePath, 'utf-8');

    // Call OpenAI API for analysis
    const response = await axios.post(
      'https://api.openai.com/v1/completions',
      {
        model: 'text-davinci-003',
        prompt: `Analyze this PR diff and assign an impact score based on size, complexity, and risk:\n${diff}`,
        max_tokens: 100,
      },
      {
        headers: {
          Authorization: `Bearer ${options.openAiApiKey}`,
          'Content-Type': 'application/json',
        },
      }
    );

    const impactScore = response.data.choices[0].text.trim();
    console.log(`Impact Score: ${impactScore}`);

    // Post results to GitHub. Commenting on the PR requires a GitHub token
    // (e.g. GITHUB_TOKEN passed in as an environment variable); the OpenAI key
    // cannot authenticate against the GitHub API.
    const octokit = github.getOctokit(process.env.GITHUB_TOKEN ?? '');
    await octokit.rest.issues.createComment({
      issue_number: parseInt(options.prNumber, 10),
      owner: options.repoOwner,
      repo: options.repoName,
      body: `## AI Impact Analysis\n\nImpact Score: ${impactScore}\n\n_This analysis was generated automatically using AI._`,
    });

    return { success: true };
  } catch (error) {
    console.error('Error:', error);
    return { success: false };
  }
}
Key Benefits
- Separation of Concerns:
  - The GitHub workflow defines the flow.
  - The custom action handles orchestration.
  - The Nx executor encapsulates the AI logic.
- Reusability:
  - The Nx executor can be reused for local analysis or integrated into other workflows.
- Flexibility:
  - You can enhance the executor logic (e.g., add more metrics or switch AI models) without changing the workflow.
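As one example of "adding more metrics", the executor could compute simple statistics from the diff before calling the model and feed them into the prompt. This is a minimal sketch; countDiffStats is a hypothetical helper and is not part of the executor shown above.

// Sketch of an extra metric the executor could compute from the raw unified diff.
export interface DiffStats {
  filesChanged: number;
  linesAdded: number;
  linesRemoved: number;
}

export function countDiffStats(diff: string): DiffStats {
  const lines = diff.split('\n');
  return {
    // Each changed file starts with a "diff --git" header in a unified diff.
    filesChanged: lines.filter((l) => l.startsWith('diff --git')).length,
    // Added/removed lines start with a single +/-, excluding the +++/--- file markers.
    linesAdded: lines.filter((l) => l.startsWith('+') && !l.startsWith('+++')).length,
    linesRemoved: lines.filter((l) => l.startsWith('-') && !l.startsWith('---')).length,
  };
}

The resulting numbers could be appended to the prompt or blended into the score directly, all without touching the GitHub workflow.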
Next Steps for Setup
- Set up the custom GitHub Action and Nx executor in your repo.
- Add OPENAI_API_KEY as a secret in your GitHub repository.
- Test the workflow and refine the AI prompt to suit your team’s needs.
Let me know how deep you want to go on this—refining the Nx executor or enhancing the GitHub Action is just a message away!
You:
1. Let's update our working prototype so that the OpenAI key is retrieved from a secret or environment variable. 2. Let's also assume that the Nx executor is implemented in a package that is retrieved from the npm registry, using a scope name and the name of the package.

I would also like the executor to create an output variable for the response so that it can be returned and used by the GitHub workflow. Hope this makes sense.
ChatGPT:
"Alright, let’s make this prototype leaner and meaner. OpenAI keys securely handled, scoped Nx executors from npm, and output variables ready for GitHub workflows to do their thing. Let’s get it done."
Here’s how we’ll update the implementation:
Updates Overview
- Retrieve OpenAI API Key Securely:
- Use an environment variable or GitHub secret for the API key.
- Ensure the Nx executor reads the key from the environment or workflow input.
- Use Scoped Nx Executor Package:
- Assume the executor is available via an npm package (@your-scope/nx-ai-executor).
- Include it in the project and call it within the workflow.
- Output Variable for AI Response:
- The Nx executor will return the AI response as an output variable.
- This will allow GitHub workflows to use the response downstream.
Updated GitHub Workflow
name: AI-Powered PR Analysis

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  analyze-pr:
    runs-on: ubuntu-latest

    steps:
      # Step 1: Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Step 2: Set up Node.js environment
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 16

      # Step 3: Install dependencies
      - name: Install dependencies
        run: npm install

      # Step 4: Run the Nx executor for AI analysis
      - name: Run AI PR Analysis
        id: ai-analysis
        run: npx nx ai-pr-analyze --prNumber=${{ github.event.pull_request.number }} --repoOwner=${{ github.repository_owner }} --repoName=${{ github.event.repository.name }} --baseBranch=${{ github.event.pull_request.base.ref }}
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

      # Step 5: Use AI response in subsequent steps
      - name: Output AI Impact Score
        run: echo "AI Impact Score: ${{ steps.ai-analysis.outputs.aiResponse }}"
Nx Executor Package
executor.ts
This executor is now part of the npm package (e.g., @your-scope/nx-ai-executor).
import { ExecutorContext } from '@nrwl/devkit';
import axios from 'axios';
import fs from 'fs';
import * as util from 'util';
import { exec } from 'child_process';

const execPromise = util.promisify(exec);

export interface AiPrAnalyzeOptions {
  prNumber: string;
  repoOwner: string;
  repoName: string;
  baseBranch: string;
}

export default async function runExecutor(
  options: AiPrAnalyzeOptions,
  context: ExecutorContext
): Promise<{ success: boolean; aiResponse?: string }> {
  const diffFilePath = './tmp/pr_diff.txt';

  try {
    const openAiApiKey = process.env.OPENAI_API_KEY;
    if (!openAiApiKey) {
      throw new Error('OPENAI_API_KEY environment variable is not set.');
    }

    // Generate the PR diff
    await execPromise(
      `git fetch origin ${options.baseBranch} && git diff origin/${options.baseBranch} > ${diffFilePath}`
    );
    const diff = fs.readFileSync(diffFilePath, 'utf-8');

    // Call OpenAI API for analysis
    const response = await axios.post(
      'https://api.openai.com/v1/completions',
      {
        model: 'text-davinci-003',
        prompt: `Analyze this PR diff and assign an impact score based on size, complexity, and risk:\n${diff}`,
        max_tokens: 100,
      },
      {
        headers: {
          Authorization: `Bearer ${openAiApiKey}`,
          'Content-Type': 'application/json',
        },
      }
    );

    const impactScore = response.data.choices[0].text.trim();
    console.log(`Impact Score: ${impactScore}`);

    return {
      success: true,
      aiResponse: impactScore,
    };
  } catch (error) {
    console.error('Error during AI analysis:', error);
    return { success: false };
  }
}
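The executor currently forwards the model's raw text as the "score". If downstream steps need an actual number (for example, to fail the job above a threshold), a small normalization step could run before returning. This is a sketch under the assumption that the prompt asks the model to emit a line like "Score: 7"; parseImpactScore is an illustrative helper, not part of the executor above.

// Illustrative helper: extract a numeric score from the model's free-text reply.
// Assumes the prompt requests a line such as "Score: 7"; returns undefined otherwise.
export function parseImpactScore(aiText: string): number | undefined {
  const match = aiText.match(/score\s*[:=]\s*(\d{1,2})/i);
  if (!match) {
    return undefined;
  }
  const score = Number(match[1]);
  return Number.isFinite(score) ? score : undefined;
}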
Package Structure for npm
Here’s what the scoped package (@your-scope/nx-ai-executor) should include:
@your-scope/nx-ai-executor/
├── package.json
├── README.md
├── src/
│   └── executors/
│       └── ai-pr-analyze/
│           └── executor.ts
└── dist/ (after build)
package.json
{
  "name": "@your-scope/nx-ai-executor",
  "version": "1.0.0",
  "description": "Nx executor for AI-powered pull request analysis",
  "main": "./dist/index.js",
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "lint": "eslint . --ext .ts"
  },
  "dependencies": {
    "axios": "^1.3.0",
    "@nrwl/devkit": "^14.7.0"
  },
  "devDependencies": {
    "typescript": "^4.5.5",
    "eslint": "^8.10.0"
  },
  "keywords": ["nx", "executor", "ai", "openai"],
  "license": "MIT"
}
Updates to Executor for Output Variables
The Nx executor now returns the aiResponse value, which GitHub workflows can consume. Note that a value returned from an executor does not by itself become a GitHub Actions step output; it still has to be written to GITHUB_OUTPUT, as sketched after the snippets below.
GitHub Workflow Step Update
- name: Run AI PR Analysis
  id: ai-analysis
  run: npx nx ai-pr-analyze --prNumber=${{ github.event.pull_request.number }} --repoOwner=${{ github.repository_owner }} --repoName=${{ github.event.repository.name }} --baseBranch=${{ github.event.pull_request.base.ref }}
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

- name: Output AI Impact Score
  run: echo "AI Impact Score: ${{ steps.ai-analysis.outputs.aiResponse }}"
Executor Update
return {
  success: true,
  aiResponse: impactScore, // This becomes the output variable
};
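One way to bridge the executor's return value and the aiResponse step output the workflow reads is to have the executor append the value to the file that GITHUB_OUTPUT points at before returning. The snippet below is a minimal sketch under that assumption (setStepOutput is an illustrative helper, not part of the published package), and it is a no-op when the executor runs outside of GitHub Actions.

import { appendFileSync } from 'fs';
import { EOL } from 'os';

// Sketch: expose a value as a GitHub Actions step output from inside the executor.
// GITHUB_OUTPUT is set by the Actions runner; when it is absent (local runs), do nothing.
export function setStepOutput(name: string, value: string): void {
  const outputFile = process.env.GITHUB_OUTPUT;
  if (!outputFile) {
    return; // running locally, nothing to write
  }
  // Flatten newlines so the simple name=value syntax stays valid.
  appendFileSync(outputFile, `${name}=${value.replace(/\r?\n/g, ' ')}${EOL}`);
}

// Inside runExecutor, just before returning:
//   setStepOutput('aiResponse', impactScore);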
How It All Ties Together
- The GitHub Workflow orchestrates the process and securely passes the OpenAI key via env.
- The Custom Nx Executor is part of a scoped npm package, enabling easy reuse across projects.
- The Executor Outputs the AI response, making it available for downstream steps in the workflow.
Let me know if you want to publish the npm package or refine this implementation further!
You:
I have a question about the GitHub workflow and actions. I like it better when the GitHub workflow uses an action to abstract the call to the Nx executor, so if we could revert to that, that would be great. My question is: if the executor has a dependency on additional packages from the npm registry, how does the GitHub workflow install or use those dependency packages? Do I have to do anything different? How does it run in that context when there's a dependency?
ChatGPT:
"Alright, let’s streamline this! Abstracting the Nx executor call into a GitHub Action is a clean move—and I’ll clear up the mystery of dependencies in the workflow context."
Here’s the breakdown of how GitHub workflows handle dependencies when running a custom GitHub Action that calls an Nx executor.
Reverting to GitHub Action with Nx Executor Abstraction
This is a smarter structure because it separates concerns:
- GitHub Workflow: High-level orchestration.
- GitHub Action: Handles the execution of the Nx executor, along with its dependencies.
Here’s what the updated workflow would look like:
GitHub Workflow File
name: AI-Powered PR Analysis

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  analyze-pr:
    runs-on: ubuntu-latest

    steps:
      # Step 1: Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Step 2: Run custom GitHub Action
      - name: AI PR Analysis
        uses: ./actions/ai-pr-analysis   # Path to your custom action
        with:
          pr_number: ${{ github.event.pull_request.number }}
          repo_owner: ${{ github.repository_owner }}
          repo_name: ${{ github.event.repository.name }}
          base_branch: ${{ github.event.pull_request.base.ref }}
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
How Dependencies Are Handled
When a GitHub Action (your ai-pr-analysis action) requires dependencies, they are declared in the action's package.json, and the action itself installs them at runtime; GitHub does not install them for you automatically.
Key Steps GitHub Takes for Dependencies
- Environment Setup:
  - When the workflow invokes the custom action, GitHub provisions an environment (ubuntu-latest or other OS) with basic tools like Node.js.
- Dependencies Declared in package.json:
  - You define all your dependencies in the custom action's package.json.
- Install Command in the Action Logic:
  - Your action ensures dependencies are installed using npm install or yarn install during execution.
Custom GitHub Action with Nx Executor
action.yml
This defines the GitHub Action and its inputs.
name: AI-Powered PR Analysis
description: Custom action to analyze PRs using AI and Nx executors.

inputs:
  pr_number:
    description: The PR number.
    required: true
  repo_owner:
    description: The repository owner.
    required: true
  repo_name:
    description: The repository name.
    required: true
  base_branch:
    description: The base branch of the PR.
    required: true

runs:
  using: "node16"
  main: "index.js"
package.json
Declare all dependencies for the custom action.
{
  "name": "ai-pr-analysis-action",
  "version": "1.0.0",
  "description": "GitHub Action for AI-powered PR analysis using Nx executors.",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "build": "tsc",
    "lint": "eslint . --ext .ts"
  },
  "dependencies": {
    "axios": "^1.3.0",
    "@actions/core": "^1.10.0",
    "@your-scope/nx-ai-executor": "^1.0.0"
  },
  "devDependencies": {
    "typescript": "^4.5.5",
    "eslint": "^8.10.0"
  }
}
index.js
Install dependencies at runtime and execute the Nx executor.
const { exec } = require('child_process');
const core = require('@actions/core');

async function run() {
  try {
    const prNumber = core.getInput('pr_number');
    const repoOwner = core.getInput('repo_owner');
    const repoName = core.getInput('repo_name');
    const baseBranch = core.getInput('base_branch');
    const openAiApiKey = process.env.OPENAI_API_KEY;

    if (!openAiApiKey) {
      throw new Error('OPENAI_API_KEY environment variable is not set.');
    }

    console.log('Installing dependencies...');
    exec('npm install', (installError) => {
      if (installError) {
        core.setFailed(`Failed to install dependencies: ${installError.message}`);
        return;
      }

      console.log('Running Nx executor...');
      exec(
        `npx nx ai-pr-analyze --prNumber=${prNumber} --repoOwner=${repoOwner} --repoName=${repoName} --baseBranch=${baseBranch} --openAiApiKey=${openAiApiKey}`,
        (error, stdout, stderr) => {
          if (error) {
            core.setFailed(`Executor failed: ${error.message}`);
            return;
          }

          console.log(stdout);
          core.setOutput('aiResponse', stdout);
        }
      );
    });
  } catch (error) {
    core.setFailed(error.message);
  }
}

run();
Key Considerations for Dependencies
- Runtime Installation:
  - Dependencies are installed at runtime via npm install, or pre-installed in your package build step.
  - This happens within the action's execution environment, ensuring all dependencies are available.

- Caching Dependencies (Optional):
  - If your action is re-used frequently, you can add dependency caching to speed up builds:

        - name: Cache Node.js modules
          uses: actions/cache@v3
          with:
            path: ~/.npm
            key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
            restore-keys: |
              ${{ runner.os }}-node-

- Nx Executors from npm:
  - Ensure your Nx executor package (@your-scope/nx-ai-executor) is published and accessible from the npm registry.
  - If it's private, configure an .npmrc file or GitHub registry token for authentication.
Why This Works
- The GitHub Action dynamically installs dependencies using Node.js tooling.
- It delegates the heavy lifting to the Nx executor, keeping the action lightweight.
- Dependencies remain isolated to the action’s runtime environment, so no extra setup is needed in your workflow.
Conclusion
- Publish the Nx executor package to npm.
- Test the custom action with a workflow to ensure dependencies are installed correctly.
- Optionally, add caching to speed up builds for frequent runs.
Let me know if you want help publishing the package or refining this workflow!