Copilot Chat Integration

Please note that Wallaby’s GitHub Copilot Chat integration is available in VS Code only.

The best AI model for investigating unit test errors is the one with the most context, provided by the user and the right tools. Wallaby automatically gives the AI what it needs to debug smarter: execution paths, test coverage, and runtime values.

How to use Wallaby with GitHub Copilot Chat

Simply install the Wallaby and GitHub Copilot extensions in VS Code (Copilot has a free tier for everyone), start Wallaby, and use the new Investigate with AI icon next to the failing test. The command is also available via the Command Palette and the code lens above the failing test.

[Screenshot: Wallaby with GitHub Copilot]

Wallaby allows you to choose between three modes of operation: Investigation, Analytical Fix, and Direct Fix.

  • Investigation mode provides an in-depth analysis of the failing test, outlining possible causes and next steps. The LLM is instructed not to modify code in this mode.
  • Analytical Fix mode performs a detailed analysis using all available tools, ideal for complex or unclear failures.
  • Direct Fix mode analyzes the issue and applies a fix with minimal explanation, best for straightforward problems.

After you choose a mode, Wallaby opens a new Copilot Chat and asks the AI model to investigate or fix the failing test. The model analyzes the provided error details and may then request additional context from Wallaby on the fly, such as:

  • Code coverage: Wallaby provides not only overall coverage figures for all tests, but also the exact execution path that led to the failure. With this information, the AI model knows exactly which lines of code the failing test executed, so it doesn’t need to guess and can produce more accurate results.
  • Runtime values: just as you can hover over any source code expression to explore its value with Wallaby, the AI model can ask Wallaby for the runtime value of any expression in the code to support its investigation.
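
For illustration only, a runtime-value exchange might look roughly like the following sketch. The tool name, input shape, and response shape here are assumptions made for this example, not Wallaby’s actual protocol:

    // Hypothetical request issued by the AI model to a Wallaby tool:
    const request = {
      tool: 'wallaby_runtimeValues',            // assumed tool name
      input: { test: 'adds two numbers', expression: 'result.total' },
    };

    // Hypothetical response returned by Wallaby to the model:
    const response = {
      value: 'NaN',                             // the expression's runtime value
      type: 'number',
      location: 'src/calculator.ts:42',         // where the value was recorded
    };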

Finally, the results of the investigation are displayed in the chat, and the fix is applied to your code if you choose to accept it.

How does it work

The Wallaby extension registers Wallaby AI tools via the VS Code Copilot API. When you click the Investigate with AI icon, Wallaby automatically opens a new Copilot Chat and provides the model with a prompt to investigate the failing test.
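
For a sense of the mechanics, here is a minimal sketch of registering a tool with the VS Code Language Model Tools API (vscode.lm.registerTool). The tool name, input shape, and lookupRuntimeValue helper are hypothetical stand-ins, not Wallaby’s actual implementation; a real tool must also be declared under the languageModelTools contribution point in package.json:

    import * as vscode from 'vscode';

    interface RuntimeValueInput {
      expression: string;
    }

    // Hypothetical helper: a real extension would query the test runtime.
    async function lookupRuntimeValue(expression: string): Promise<string> {
      return '42'; // placeholder value
    }

    export function activate(context: vscode.ExtensionContext) {
      context.subscriptions.push(
        vscode.lm.registerTool<RuntimeValueInput>('demo_runtimeValues', {
          // Shown to the user before the tool runs, so they can confirm
          // what is about to be shared with the model.
          async prepareInvocation(options) {
            return {
              invocationMessage: `Reading runtime value of ${options.input.expression}`,
            };
          },
          // Called when the model decides to use the tool; the returned
          // text parts become context for the model's next turn.
          async invoke(options) {
            const value = await lookupRuntimeValue(options.input.expression);
            return new vscode.LanguageModelToolResult([
              new vscode.LanguageModelTextPart(`${options.input.expression} = ${value}`),
            ]);
          },
        })
      );
    }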

The AI model analyzes the provided error details and can request additional context (runtime values, execution paths, and test coverage) directly from Wallaby. By default, Copilot prompts you to allow the AI model to access the requested context, but you can also configure it to provide the context to the AI model automatically.
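
For example, recent VS Code builds expose an experimental setting to skip per-tool confirmations entirely. The setting below is an illustration and may change between releases; the safer option is to use the per-tool "Allow" choices in the confirmation dialog itself:

    // settings.json (experimental at the time of writing; name may vary)
    {
      "chat.tools.autoApprove": true
    }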

Once the model has all the necessary context, it provides the results of the investigation in the chat or applies the fix to your code.

Tips and tricks

  • In addition to Wallaby’s built-in prompts, you can use your own prompts to leverage Wallaby AI tools. For example, you can ask the AI model:

    • Analyze coverage of test under cursor using Wallaby coverage tools
    • What tests are affected by my current code changes?
    • List functions of source.ts covered by both "test 1" and "test 2"
  • You can also create your own custom prompts for specific use cases by using Copilot custom instructions (see the example after this list).

  • If you are using the Copilot free tier, note that the Wallaby integration sends chat messages, so usage limits may apply.

  • If the AI model is not using the provided context effectively, or is producing errors (such as Tool multi_tool_use.parallel was not contributed) or strange results, try re-running the chat with the same or a different AI model.
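
As an illustration of the custom instructions tip above, a workspace-level .github/copilot-instructions.md file could steer the model toward Wallaby’s tools. The wording below is an example only:

    When investigating failing tests in this repository, use the Wallaby
    tools to read real coverage and runtime values instead of guessing
    from static code, and report which lines the failing test executed.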

Security and privacy considerations

Before the first use of the Wallaby AI integration, VS Code will prompt you to allow Wallaby to access Copilot models.

During the initial request of an error investigation, Wallaby does not provide any source code or runtime values to the AI model; it sends only the failing test identifier.

After the initial request, the AI model may request additional context, such as runtime values and test coverage, from Wallaby. By default, Wallaby prompts you to allow the AI model to access the requested context, with a full description of what is being accessed, but you can also configure Copilot to provide the context to the AI model automatically.