<p align="center"> <a href="https://github.com/Issafalcon/neotest-dotnet/actions/workflows/main.yml"> <img alt="GitHub Workflow Status" src="https://img.shields.io/github/actions/workflow/status/Issafalcon/neotest-dotnet/main.yml?label=main&style=for-the-badge"> </a> <a href="https://github.com/Issafalcon/neotest-dotnet/releases"> <img alt="GitHub release (latest SemVer)" src="https://img.shields.io/github/v/release/Issafalcon/neotest-dotnet?style=for-the-badge"> </a> </p>

# Neotest .NET

Neotest adapter for dotnet tests.
- Covers the majority of use cases for the 3 major .NET test runners
- Attempts to provide support for SpecFlow generated tests for the various test runners
  - Support for this may still be patchy, so please raise an issue if it doesn't behave as expected
  - `RunNearest` or `RunInFile` functions will need to be run from the generated SpecFlow tests (NOT the `.feature` file)
## Pre-requisites

neotest-dotnet makes a number of assumptions about your environment:
- A `dotnet` SDK that is compatible with the current project is installed, and the `dotnet` executable is on the user's runtime path (future updates may allow customisation of the dotnet exe location)
- The user is running tests using one of the supported test runners / frameworks (see the support grid)
- (For debugging) `netcoredbg` is installed and the `nvim-dap` plugin has been configured for `netcoredbg` (see the debug config for more details)
- `nvim-treesitter` and the parser for C# are installed
## Installation

### Packer
```lua
use({
  "nvim-neotest/neotest",
  requires = {
    {
      "Issafalcon/neotest-dotnet",
    },
  }
})
```
### vim-plug

```vim
Plug 'https://github.com/nvim-neotest/neotest'
Plug 'https://github.com/Issafalcon/neotest-dotnet'
```
## Usage

```lua
require("neotest").setup({
  adapters = {
    require("neotest-dotnet")
  }
})
```
Additional configuration settings can be provided:
```lua
require("neotest").setup({
  adapters = {
    require("neotest-dotnet")({
      dap = {
        -- Extra arguments for nvim-dap configuration
        -- See https://github.com/microsoft/debugpy/wiki/Debug-configuration-settings for values
        args = { justMyCode = false },
        -- Enter the name of your dap adapter, the default value is netcoredbg
        adapter_name = "netcoredbg"
      },
      -- Let the test-discovery know about your custom attributes (otherwise tests will not be picked up)
      -- Note: Only custom attributes for non-parameterized tests should be added here. See the support note about parameterized tests
      custom_attributes = {
        xunit = { "MyCustomFactAttribute" },
        nunit = { "MyCustomTestAttribute" },
        mstest = { "MyCustomTestMethodAttribute" }
      },
      -- Provide any additional "dotnet test" CLI arguments here. These will be applied to ALL test runs performed via neotest.
      -- These need to be a table of strings, ideally with one key-value pair per item.
      dotnet_additional_args = {
        "--verbosity detailed"
      },
      -- Tell neotest-dotnet to use either solution (requires .sln file) or project (requires .csproj or .fsproj file) as project root
      -- Note: If neovim is opened from the solution root, using the 'project' setting may sometimes find all nested projects; however,
      -- to locate all test projects in the solution more reliably (if a .sln file is present), 'solution' is better.
      discovery_root = "project" -- Default
    })
  }
})
```
## Additional `dotnet test` arguments
As well as the `dotnet_additional_args` option in the adapter setup above, you may also provide additional CLI arguments as a table to each `neotest` command.
When you do this, the additional args provided in the setup function are replaced in their entirety by the ones provided at the command level.
For example, to provide a `runtime` argument to the `dotnet test` command for all the tests in the file, you can run:

```lua
require("neotest").run.run({ vim.fn.expand("%"), dotnet_additional_args = { "--runtime win-x64" } })
```
**NOTE:**

- The `--logger` and `--results-directory` arguments, as well as the `--filter` expression, are all added by the adapter, so changing any of these will likely result in errors in the adapter.
- Not all possible combinations of arguments will work with the adapter, as you might expect, given the way that output is specifically parsed and handled by the adapter.
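Per-command args can also be combined with the nearest-test form of `run`. A minimal sketch (the argument value here is illustrative only, not something the adapter requires):

```lua
-- Run the nearest test with extra "dotnet test" CLI arguments.
-- Note: these replace any dotnet_additional_args from the setup call entirely.
require("neotest").run.run({ dotnet_additional_args = { "--verbosity quiet" } })
```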
## Debugging

### Debugging using the neotest dap strategy
- Install `netcoredbg` to a location of your choosing and configure `nvim-dap` to point to the correct path
- The example below uses the `mason.nvim` default install path:
```lua
local install_dir = path.concat{ vim.fn.stdpath "data", "mason" }
dap.adapters.netcoredbg = {
  type = 'executable',
  command = install_dir .. '/packages/netcoredbg/netcoredbg',
  args = {'--interpreter=vscode'}
}
```
Neotest-Dotnet uses a custom strategy for debugging, as `netcoredbg` needs to attach to the running test. The test command is modified by setting the `VSTEST_HOST_DEBUG` environment variable, which then waits for the debugger to attach.
To use the custom strategy, you no longer need to provide a custom command; the standard neotest command recommended for debugging is enough:

```lua
require("neotest").run.run({ strategy = "dap" })
```
The adapter will replace the standard `dap` strategy with the custom one automatically.
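For convenience, the debug command can be bound to a key; a minimal sketch (the `<leader>td` mapping and its description are arbitrary choices, not part of the plugin):

```lua
-- Hypothetical mapping: debug the nearest test using the dap strategy
vim.keymap.set("n", "<leader>td", function()
  require("neotest").run.run({ strategy = "dap" })
end, { desc = "Debug nearest test" })
```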
## Framework Support

The adapter supports the `NUnit`, `xUnit`, and `MSTest` frameworks, to varying degrees. Given that each framework has its own test runner, and its own specific features and attributes, it is a difficult task to support all the possible use cases for each one.

To see if your use case is supported, check the grids below. If it isn't there, feel free to raise a ticket, or better yet, take a look at how to contribute and raise a PR to support your use case!
### Key

- :heavy_check_mark: = Fully supported
- :part_alternation_mark: = Partially supported (functionality might behave unusually)
- :interrobang: = As yet untested
- :x: = Unsupported (tested)
### NUnit

| Framework Feature | Scope Level | Docs | Status | Notes |
| --- | --- | --- | --- | --- |
| `Test` (Attribute) | Method | Test - NUnit | :heavy_check_mark: | Supported when used inside a class with or without the `TestFixture` attribute decoration |
| `TestFixture` (Attribute) | Class | TestFixture - NUnit | :heavy_check_mark: | |
| `TestCase()` (Attribute) | Method | TestCase - NUnit | :heavy_check_mark: | Support for parameterized tests with inline parameters. Supports neotest 'run nearest' and 'run file' functionality |
| Nested Classes | Class | | :heavy_check_mark: | Fully qualified name is corrected to include `+` when class is nested |
| `Theory` (Attribute) | Method | Theory - NUnit | :x: | Currently has conflicts with xUnit's `Theory`, which is more commonly used |
| `TestCaseSource` (Attribute) | Method | TestCaseSource - NUnit | :heavy_check_mark: | Bundles all dynamically parameterized tests under one neotest listing (short output contains errors for all tests; one test failure displays the failure indicator for the entire test "grouping"). Supports neotest 'run nearest' and 'run file' functionality |
### xUnit

| Framework Feature | Scope Level | Docs | Status | Notes |
| --- | --- | --- | --- | --- |
| `Fact` (Attribute) | Method | Fact - xUnit | :heavy_check_mark: | |
| `Theory` (Attribute) | Method | Theory - xUnit | :heavy_check_mark: | Used in conjunction with the `InlineData()` attribute |
| `InlineData()` (Attribute) | Method | Theory - xUnit | :heavy_check_mark: | Support for parameterized tests with inline parameters. Supports neotest 'run nearest' and 'run file' functionality |
| `ClassData()` (Attribute) | Method | ClassData - xUnit | :heavy_check_mark: | Bundles all dynamically parameterized tests under one neotest listing (short output contains errors for all tests; one test failure displays the failure indicator for the entire test "grouping"). Supports neotest 'run nearest' and 'run file' functionality |
| Nested Classes | Class | | :heavy_check_mark: | Fully qualified name is corrected to include `+` when class is nested |
### MSTest

| Framework Feature | Scope Level | Docs | Status | Notes |
| --- | --- | --- | --- | --- |
| `TestMethod` (Attribute) | Method | TestMethod - MSTest | :heavy_check_mark: | |
| `TestClass` (Attribute) | Class | TestClass - MSTest | :heavy_check_mark: | |
| Nested Classes | Class | | :heavy_check_mark: | Fully qualified name is corrected to include `+` when class is nested |
| `DataTestMethod` (Attribute) | Method | DataTestMethod - MSTest | :heavy_check_mark: | |
| `DataRow` (Attribute) | Method | DataRow - MSTest | :heavy_check_mark: | Support for parameterized tests with inline parameters. Supports neotest 'run nearest' and 'run file' functionality |
## Limitations

- A tradeoff was made between being able to run parameterized tests and the specificity of the `dotnet --filter` command options. A more lenient 'contains' type filter is used in order for the adapter to be able to work with parameterized tests. Unfortunately, no amount of formatting would support specific `FullyQualifiedName` filters for the `dotnet test` command for parameterized tests.
- Dynamically parameterized tests need to be grouped together, as neotest-dotnet is unable to robustly match the full test names that the .NET test runner attaches to the tests at runtime.
  - An attempt was made to use `dotnet test -t` to extract the dynamic test names, but this was too unreliable (duplicate test names were indistinguishable, and xUnit was the only runner that provided fully qualified test names)
- See the support guidance for feature and language support
- F# is currently unsupported, as there is no complete tree-sitter parser for F# available yet (https://github.com/baronfel/tree-sitter-fsharp)
- As mentioned in the Debugging section, there are some discrepancies in test output at the moment.
### NUnit Limitations

- Using the `[Test]` attribute alongside `[TestCase]` attributes on the same method will cause `neotest-dotnet` to duplicate the item with erroneous nesting in the test structure. This will also break the ability of neotest to run the test cases, e.g.:
```csharp
[Test]
[TestCase(1)]
[TestCase(2)]
public void Test_With_Parameters(int a)
{
    Assert.AreEqual(2, a);
}
```
- The workaround is to instead remove the redundant `[Test]` attribute.
## Contributing

Any help on this plugin would be very much appreciated. It has turned out to be a more significant effort than I had initially imagined to account for all the Microsoft `dotnet test` quirks and the various differences between each test runner.
### First Steps
If you have a use case that the adapter isn't quite able to cover, a more detailed understanding of why can be achieved by following these steps:
- Set the `log_level` property in your `neotest` setup config to `1` to reveal all the debug logs from neotest-dotnet
- Open up your test file and do what you normally do to run the tests
- Look through the neotest log files for logs prefixed with `neotest-dotnet` (the log directory can be found by running the command `:echo stdpath("log")`)
- You should be able to piece together how the nodes in the neotest summary window are created (using logs from tests that are "Found")
- The tree for each test run is printed as a list (search for `Creating specs from tree`) for each test run
- The individual specs usually follow after in the log list, showing the command and context for each spec
- `TRX Results Output` can be searched to find out how neotest-dotnet is parsing the test output files
- Final results are tied back to the original list of discovered tests by using a set of conversion functions:
  - `Test Nodes` are logged - these are taken from the original node tree list, and filtered to include only the test nodes and their children (if any)
  - `Intermediate Results` are obtained and logged by parsing the TRX output into a list of test results
  - The test nodes and intermediate results are passed to a function to correlate them with each other. If the test names in the nodes match the test names from the intermediate results, a final neotest result for that test is returned and matched to the original test position from the very initial tree of nodes
Usually, if tests are not appearing in the `neotest` summary window, or are failing to be discovered by individual or grouped test runs, there will be an issue with one of the above steps. Carefully examining the names in the original node list and the names of the tests in each of the result lists usually highlights a mismatch.
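The debug-logging setup described in the first step above can be sketched as follows (assuming the `log_level` field of the neotest config; `1` corresponds to `vim.log.levels.DEBUG`):

```lua
require("neotest").setup({
  -- 1 == vim.log.levels.DEBUG; surfaces the neotest-dotnet debug logs
  log_level = 1,
  adapters = {
    require("neotest-dotnet"),
  },
})
```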
- Narrow down the function where you think the issue is.
- Look through the unit tests (named by convention using `<filename_spec.lua>`) and check if there is a test case covering the use case for your situation
- Write a test case that would enable your use case to be satisfied
- See that the test fails
- Try to fix the issue until the test passes
### Running Tests

To run the plenary tests from the CLI, in the root folder, run:

```sh
make test
```