Chm.ski Labs

Using Your GitHub Copilot Subscription Inside GitHub Actions

Use the Copilot subscription you already pay for directly inside a GitHub Action. No extra services, gateways, or infra. A single inline call to the chat endpoint so lightweight developer tasks run in CI—consistent, repeatable, not tied to a laptop.

Kamil Chmielewski · 10 min read · AI
[Figure: GitHub Copilot and Actions integration workflow diagram]
Experimental Warning

This post uses an undocumented Copilot API. It can change or vanish at any time. This is great for quick CI model experiments, but not for production commitments.

Hacking Copilot: Your Own Private LLM API for GitHub Actions

GitHub Copilot is useful in your editor, but it's a local tool. The moment you want to automate a repeatable task in CI—like summarizing a PR description or drafting a changelog snippet—you're stuck.

This post shows you how to call Copilot's undocumented API directly from a GitHub Actions workflow. No new services, no extra frameworks. Just a scripted POST request.

In about 10 minutes, you’ll have a working API key and a simple workflow that proves you can call an LLM from CI. It’s a hack, it’s unsupported, and it will probably break. But it’s the fastest way to start experimenting with LLMs in your automation pipeline.

Why Bother?

You already have Copilot. Locally it helps you write code. But the minute you want a repeatable workflow task—summarize a PR description, draft a changelog snippet, generate a tiny refactor suggestion—you’re back to ad‑hoc copy/paste.

Doing it "officially" might mean waiting for GitHub to release a feature, or paying for a separate AI service and managing another set of keys. But we’re not going to wait. We're going to script a POST request and see what happens.

The Mission: Get a Real Copilot Token

I wanted to use the Copilot subscription I already pay for inside GitHub Actions. I looked at wrappers like LiteLLM, but setting up a proxy service in a CI workflow is overkill for this task. The goal was a direct call, which meant the problem remained: how to get a raw Copilot token.

Most guides were random gists or repos with a script that said "run this." They usually asked for a GitHub Personal Access Token (PAT) with broad permissions. Handing over a PAT to a script I don't fully trust is a hard no. That’s just asking for trouble.

My next thought was to check the official editor integrations. I dug into the Copilot Vim plugin, hoping to find a simple, maintained script I could call from the command line. What I found was a layered mix of Lua and JavaScript, tightly coupled to the editor. It was too complex to be turned into the clean one-liner I wanted.

The Official-ish Backdoor

After more digging, I stumbled on a gist that used a hard-coded CLIENT_ID to get a token. This looked promising. A quick search for that ID across GitHub led me straight to the source: a small helper script inside the official VS Code Copilot Chat extension: microsoft/vscode-copilot-chat/script/setup/getToken.mts.

This was the solution. It’s first-party TypeScript, it uses the standard device OAuth flow (so no PATs are involved), and it’s dead simple. It launches the device flow in your browser, you click "approve," and it saves a bearer token to a local .env file.

Your 60-Second Token Grab

You have two paths here. Bun is faster if you have it.

Bun path (fastest)

This command downloads the script and runs it. Bun will auto-install the open dependency on the fly.

curl -fsSLO https://raw.githubusercontent.com/microsoft/vscode-copilot-chat/refs/heads/main/script/setup/getToken.mts
bun run --install=force getToken.mts

Node.js path

If you only have Node.js, you need a couple of extra steps to install dependencies.

npm init -y >/dev/null
npm install open tsx
curl -fsSLO https://raw.githubusercontent.com/microsoft/vscode-copilot-chat/refs/heads/main/script/setup/getToken.mts
npx tsx getToken.mts

Both methods will prompt you to authorize the app in your browser and then create a .env file containing GITHUB_OAUTH_TOKEN=<long-token>.
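If you prefer to stay in the shell, you can pull the token out of that .env file without copying it by hand. A minimal sketch, assuming the file contains a single GITHUB_OAUTH_TOKEN=&lt;value&gt; line:

```shell
# Read the token out of the .env file the script just wrote
# (assumes one GITHUB_OAUTH_TOKEN=<value> line; cut -f2- keeps any '=' in the value).
export COPILOT_API_KEY="$(grep '^GITHUB_OAUTH_TOKEN=' .env 2>/dev/null | cut -d= -f2-)"

# Sanity check: print only the length, never the token itself.
echo "Token length: ${#COPILOT_API_KEY}"
```

From here you can paste the value into the repository secret, or pipe it straight into `gh secret set` if you use the GitHub CLI.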

Secure the Token

Now, promote that token to a GitHub Actions secret.

  1. Open the newly created .env file and copy the token value.
  2. In your GitHub repository, go to Settings > Secrets and variables > Actions.
  3. Create a new repository secret named COPILOT_API_KEY.

I purposefully avoided the GITHUB_ prefix for the secret name. It reduces confusion with the built-in secrets.GITHUB_TOKEN and makes it clear this is a custom, third-party key.

Finally, the API endpoint is https://api.githubcopilot.com. It's good practice to store this in a variable so you can change it later without digging through code.

export COPILOT_API_BASE="https://api.githubcopilot.com"

First Contact: Test the Token Locally

Before you embed this token in a workflow, verify it works from your own machine. This confirms your token is valid and you can reach the API, saving you a headache later.

First, make sure both variables are exported in your shell: COPILOT_API_BASE from the previous step, and COPILOT_API_KEY set to the token value from the .env file.

Listing Available Models

A simple, read-only way to test the connection is to ask the API what models it has available.

This command lists the available model IDs:

curl -s "$COPILOT_API_BASE/models" \
  -H "Authorization: Bearer $COPILOT_API_KEY" \
  -H "Copilot-Integration-Id: vscode-chat" \
  -H "Content-Type: application/json" | jq -r '.data[].id'

If this returns a list of model names like gpt-4o, you’re authenticated and ready for a real prompt.

The "Hello, World" Prompt

Now for the real test: sending a message. This one-liner curl command sends a short prompt to the chat completions endpoint.

This command sends a test prompt to the specified model:

MODEL="gpt-4o" # adjust if necessary

curl -sS \
  -H "Authorization: Bearer $COPILOT_API_KEY" \
  -H "Copilot-Integration-Id: vscode-chat" \
  -H "Content-Type: application/json" \
  -d '{"model":"'"$MODEL"'","messages":[{"role":"user","content":"Write 5 words about reverse engineering."}]}' \
  "$COPILOT_API_BASE/chat/completions" | jq '.choices[].message.content'

If you see a short, quoted string in response (e.g., "Unraveling secrets, one line at a time."), your token is good and you’re ready to move into a workflow.

Common Errors

If the command fails, check the HTTP status code:

  • 401 Unauthorized: Your COPILOT_API_KEY is wrong, expired, or wasn't exported correctly.
  • 403 Forbidden: The model you requested might be blocked, or your account doesn't have the right entitlement.
  • 404 Not Found: The API endpoint path (/chat/completions) is likely incorrect. Double-check the URL.
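To see which of these you're hitting, capture the status code with curl's -w flag. A small helper function (a hypothetical triage aid of my own, not part of any tool) can map the codes to the likely causes above:

```shell
# Capture only the HTTP status code (the response body goes to /dev/null):
#   status=$(curl -s -o /dev/null -w "%{http_code}" \
#     -H "Authorization: Bearer $COPILOT_API_KEY" \
#     -H "Copilot-Integration-Id: vscode-chat" \
#     "$COPILOT_API_BASE/models")

# Map a status code to the likely cause (hypothetical helper).
explain_status() {
  case "$1" in
    200) echo "OK: token and endpoint are fine" ;;
    401) echo "Unauthorized: token is wrong, expired, or not exported" ;;
    403) echo "Forbidden: model blocked or account lacks entitlement" ;;
    404) echo "Not Found: check the endpoint path" ;;
    429) echo "Rate limited: back off and retry" ;;
    *)   echo "Unexpected status: $1" ;;
  esac
}

explain_status 401
```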

The Payoff: A Minimal GitHub Action

If the local curl command worked, you have everything you need. Now we'll create a GitHub Actions workflow that runs on a manual trigger (workflow_dispatch). This keeps it out of your main CI loop until you're ready.

The Inline Script Heresy

We're going to put the entire script inside the YAML file. Is this a good practice for production? Absolutely not. But for a quick proof-of-concept, it's perfect. It keeps the experiment self-contained in a single file that you can easily delete or change later.

Create a new file at .github/workflows/copilot-poem.yml:

name: copilot-poem

on:
  workflow_dispatch:

jobs:
  poem:
    runs-on: ubuntu-latest
    steps:
      - name: Run poem script
        env:
          COPILOT_API_BASE: https://api.githubcopilot.com
          COPILOT_API_KEY: ${{ secrets.COPILOT_API_KEY }}
        run: |
          node -e '
          const apiBase = process.env.COPILOT_API_BASE;
          const apiKey = process.env.COPILOT_API_KEY;
          const model = "gpt-4o"; // Hard-coded for this quickstart

          async function main() {
            const body = JSON.stringify({
              model,
              messages: [
                { role: "system", content: "You write terse, technical poems." },
                { role: "user", content: "Write a 4-line poem about build pipelines." }
              ],
              temperature: 0.7,
              max_tokens: 120
            });

            const res = await fetch(`${apiBase}/chat/completions`, {
              method: "POST",
              headers: {
                "Authorization": `Bearer ${apiKey}`,
                "Copilot-Integration-Id": "vscode-chat",
                "Content-Type": "application/json"
              },
              body
            });

            if (!res.ok) {
              console.error("Copilot chat API error:", res.status, await res.text());
              process.exit(1);
            }

            const json = await res.json();
            const out = json.choices?.[0]?.message?.content || "<no content>";
            console.log("--- Generated Poem ---\n" + out + "\n----------------------");
          }

          main().catch(e => { console.error(e); process.exit(1); });
          '

Commit this file, push it, and go to the "Actions" tab in your GitHub repository. You should see a "copilot-poem" workflow. Trigger it manually. If all goes well, the job log will contain a short, AI-generated poem about build pipelines.

Keep it Disposable

An inline script is a feature, not a bug, for experiments like this. It lets you test an idea without cluttering your repository. Once you decide the idea has merit, you can graduate to a proper, version-controlled script file.
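When you do graduate, the workflow step shrinks to a checkout plus a script call. A sketch, assuming you move the inline code to a hypothetical scripts/poem.js in the repository:

```yaml
    steps:
      - uses: actions/checkout@v4
      - name: Run poem script
        env:
          COPILOT_API_BASE: https://api.githubcopilot.com
          COPILOT_API_KEY: ${{ secrets.COPILOT_API_KEY }}
        run: node scripts/poem.js
```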

Now What? From Toy to Tool

You have a working proof-of-concept. Now you can start building something useful. Here are some practical upgrades to consider, in rough order of effort:

  1. Move the script to a file. That inline Node.js code was great for a test, but it's unmanageable. Move it to a dedicated .js or .ts file in your repository. This gives you syntax highlighting, linting, and version history.
  2. Add retries. The API will occasionally fail with a 429 Too Many Requests or a 5xx server error. Implement a simple retry mechanism with exponential backoff to make your workflow more resilient.
  3. Expose the prompt as an input. Hard-coding the prompt is fine for a test, but you'll want to pass it in dynamically. Use a workflow_dispatch input to let users provide their own prompt when they run the action.
  4. Feed it real data. The real power comes from using context from GitHub. Use the github context object in your workflow to pull in the body of an issue, the description of a pull request, or the content of a commit message and feed it into the prompt.
  5. Implement model fallbacks. The best model for your task might not always be available. Write your script to try a preferred model first (e.g., a faster, cheaper one) and fall back to a more powerful one if the first call fails.
  6. Add structured logging. When things go wrong, you'll want to know why. Log the model used, the response time, and an approximation of token usage. This will help you debug issues and monitor costs if this ever becomes a real, billable service.

A Word of Warning: This Will Break

Now for the dose of reality. This entire setup is built on an undocumented, unsupported API. It is a lab bench, not production steel. Be prepared for it to break without warning.

Here are the specific points of fragility:

  • Opaque Token Lifetime: The token you generated has no documented expiration. It will eventually expire, probably silently. You'll only know when your workflow starts failing with 401 errors.
  • Vanishing Model IDs: The model names (gpt-4o, etc.) are not part of a stable, public API. They can be renamed or removed at any time.
  • Undocumented Rate Limiting: You will get rate-limited (429 errors), but the exact limits are a black box. Expect to discover them through trial and error.
  • Terms of Service Concerns: You are using an API intended for an interactive editor in an automated, non-interactive context. It is your responsibility to review the GitHub Copilot terms of service and assess whether this usage is compliant.
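Given the opaque token lifetime, one cheap mitigation is a scheduled canary workflow that hits the read-only /models endpoint daily, so expiry shows up as a failed canary instead of a broken real workflow. A sketch, using the same secret and headers as before:

```yaml
name: copilot-canary

on:
  schedule:
    - cron: "0 6 * * *"  # daily; fails loudly when the token dies

jobs:
  ping:
    runs-on: ubuntu-latest
    steps:
      - name: Check token against /models
        env:
          COPILOT_API_KEY: ${{ secrets.COPILOT_API_KEY }}
        run: |
          curl -fsS "https://api.githubcopilot.com/models" \
            -H "Authorization: Bearer $COPILOT_API_KEY" \
            -H "Copilot-Integration-Id: vscode-chat" \
            -o /dev/null
```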

Treat this as a powerful but brittle tool for experimentation. It's not something you should build critical, production infrastructure on.

The Final Word

You now have a reproducible CI call hitting Copilot’s chat API with zero extra infrastructure. That’s enough to automate small textual tasks and explore model behavior under real workflow conditions. It’s a powerful way to prototype AI-driven automations and see what’s possible.

Just remember what this is: a hack. When you build something that your team starts to depend on, switch to an official, documented, and supported API from a real provider.

Now, go break something.

Tags: github actions github copilot llm api quickstart automation
