Draft spec for auto-discovering feedback provider and tab-completer#386

Open
daxian-dbw wants to merge 11 commits into PowerShell:master from daxian-dbw:feedback-completer-autodiscovery

Conversation

@daxian-dbw
Member

@daxian-dbw daxian-dbw commented Mar 19, 2025

This is a draft spec for auto-discovering feedback provider and tab-completer. Relevant GitHub issues:




Today, to enable a feedback provider or a tab-completer for a native command,
a user has to import a module or run a script manually, or do that in their profile.
There is no way to auto-discover a feedback provider or a tab-completer for a specific native command.
Collaborator


Should we consider predictors in this set of supported auto discoverable "things"?

Collaborator


For example, the completion predictor may be good to auto-register since it doesn't really have any commands.

Member Author


There won't be a specific trigger for any predictor, so if needed, a predictor module can just be placed under the _startup_ folder of Feedbacks or Completions.

Thinking about it more, I guess we may want to unify the _startup_ folders from feedbacks and completions, since they will all be loaded at startup. Maybe we should have a startup folder at the same level as feedbacks and completions, so any predictor/feedback provider/tab-completer that needs to be loaded at session startup can be put there.
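Sketching the idea, such a unified layout might look like this (folder names are illustrative, not part of the spec):

```
<personal>/powershell/
├── startup/       # predictors, feedback providers, and completers loaded at session startup
├── feedbacks/     # feedback providers loaded lazily on their triggers
└── completions/   # tab-completers loaded on first completion of their command
```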

Comment thread Draft-Accepted/feedback_completer_auto_discovery.md

2. Should we add another key to indicate the target OS?
- A feedback provider may only work on a specific OS, such as the `"WinGet CommandNotFound"` feedback provider only works on Windows.
- Such a key could be handy if a user wants to share the feedback/tab-completer configurations among multiple machines via a cloud drive.
Collaborator


I think this could be good, thinking about the Linux command-not-found predictor. Also, if a user wants to bring this configuration across their different machines, they don't need to have multiple different folder structures? I am not entirely sure how they would necessarily do it, but I know some community folks use external tools to share their $PROFILE across machines.

Member Author


they don't need to have multiple different folder structures?

Folder structures under feedbacks and completions are the same. I guess a user can create a symbolic link to make <personal>/powershell/feedbacks or <personal>/powershell/completions point to the folders from a cloud drive.
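For example, a sketch of such a link on Windows (the cloud-drive path is an assumption):

```powershell
# Make the local completions folder point at a copy synced via a cloud drive.
# Requires elevation or Developer Mode on Windows; paths are illustrative.
New-Item -ItemType SymbolicLink `
    -Path "$HOME\Documents\PowerShell\completions" `
    -Target "$HOME\CloudDrive\powershell\completions"
```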

- A feedback provider may only work on a specific OS, such as the `"WinGet CommandNotFound"` feedback provider only works on Windows.
- Such a key could be handy if a user wants to share the feedback/tab-completer configurations among multiple machines via a cloud drive.

3. Do we really need a folder for each feedback provider?
Collaborator


Is there a reason why this all can't be in a single JSON file, i.e. feedbackproviders.json?

Member Author


Tool installation needs to easily deploy/remove the auto-discovery configuration. We want to avoid updating a single file to make that easy.

@iSazonov
Contributor

I see the key elements here - autodiscover, autoload, and trigger.
All this is already available in PowerShell modules.
Why not just adapt the existing mechanism to meet new needs?

One option was suggested by me a few years ago: PowerShell/PowerShell#13428
This allows us to load any necessary extensions for native commands.

For example, if we introduce a naming standard, we can put Native-dotnet in any module and the engine (NativeCommandProcessor) will trigger discovery of this module and load, for example, an assembly/script with a completer for the dotnet command.
(If we have to strictly follow the existing naming rules for cmdlets, then the name could be Get-NativeCommand<name>, e.g. Get-NativeCommanddotnet.)

If we need a CommandNotFound feedback provider for WinGet, we can agree that we are looking for a module with a name like FeedbackProviderWinGet where FeedbackProvider is a search key for the scenario.

Everything for modules is familiar and understandable to users and it can be implemented in small steps.

whose file names should match the names of the commands.
Those completion scripts are loaded only when their corresponding commands' completion is triggered for the first time.

We will have separate directories for feedback providers and tab-completers, for 2 reasons:
Member


This will make it impossible for an Appx app to participate since they can't place files outside of their own installed folder

Copy link
Copy Markdown
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I mentioned Appx and MSIX packages in the "Discussion Points" below. I think for those apps:

  • If they want to provide a tab-completer, then the tab-completer needs to be exposed by running the tool with a special flag, such as <tool> --ps-completion; then the user can manually create the "deployment+configuration" for the tool using the output script. Or, even better, the user can just run the output script, which will create the deployment automatically.
  • If they want to provide a feedback provider, I'm not sure how that will be possible. According to Add a way to lazily autoload argument completers PowerShell#17283 (comment), the tool's DLL should not be used by another process, but feedback providers and predictors are binary implementations only.

Member


@daxian-dbw I believe this has been resolved so that we can enable Appx packaged commands to expose PS integration, can you update?


This will make it impossible for an Appx app to participate since they can't place files outside of their own installed folder

They could, but it's strongly discouraged. We've done it before in rare cases and they've all gone sideways griefing devs, users and us. They also degrade MSIX's security and integrity models. So, yeah, we prefer not to go here w/o suuuuper high business justification, and even then it usually needs a huge backwards compatibility element.

I don't see sufficient justification here, especially not with Powershell under active development. There are better answers.

Comment thread Draft-Accepted/feedback_completer_auto_discovery.md Outdated
Co-authored-by: Steve Lee <slee@microsoft.com>
Comment thread Draft-Accepted/feedback_completer_auto_discovery.md Outdated
Comment thread Draft-Accepted/feedback_completer_auto_discovery.md Outdated
daxian-dbw and others added 2 commits April 3, 2025 11:44
Co-authored-by: Travis Plunk <travis.plunk@microsoft.com>
Co-authored-by: Travis Plunk <travis.plunk@microsoft.com>
@andrewducker

Possibly silly question, particularly at this late stage, but as a DLL can contain a bunch of information for how to access it through PowerShell, and in .Net an exe is just another assembly, why not have it be queryable for what its capabilities are?
The first time you access the exe PowerShell can query it to see what commands it supports, and from then on you just get autocompletion.
Effectively, use Import-Module on the EXE, and then direct the commands to the Cmdlet classes you just loaded in.

Or am I missing something obvious?

@andrewducker

Or am I missing something obvious?

I am! This is for entirely non-PowerShell commands which we want to have autocompletion for. Sorry, my misunderstanding.

if there is any, will be from the same module.
So, when the command becomes available, the feedback provider and/or tab-completer will become available too.

Therefore, the auto-discovery for feedback provider and tab-completer targets native commands only.
Contributor


Just for the record, there's been a lot of discussion in Discord chat where people are going so far as to use dynamic parameters just to try and create argument completion for standalone scripts. It might be worth considering them as well.


Is that motivated by the inability to define custom classes/enums and dynamic completers in scripts before the parameter block, or am I misunderstanding the request? If so, I feel like a better solution would be to lift that limitation, instead of providing a way to define an external completer.

Comment on lines +94 to +99
```powershell
@{
module = '<module-name-or-path>[@<version>]', # Module to load to register the feedback provider.
arguments = @('<arg1>', '<arg2>'), # Optional arguments for module loading.
disable = $false, # Control whether auto-discovery should find this feedback provider.
}
```
Member Author


[discussion point] With this design, a user can specify only 1 feedback provider for a native command.
But, would there be any reason we want to allow multiple feedback providers for a single native command?

Note that it's not possible to have multiple completers for a single native command, because it's a 1-to-1 mapping in the native completer table, unless the user provides a custom completer script block that aggregates the completion results from multiple tab-completers.

Given that we don't support it for native completers, I don't think we should support it for feedback providers either.
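For context, such an aggregating completer is possible today with Register-ArgumentCompleter; a minimal sketch, assuming $completerA and $completerB are two hypothetical underlying completers:

```powershell
# Hypothetical script blocks implementing two independent completers for 'git'.
$completerA = { param($wordToComplete, $commandAst, $cursorPosition) <# ... #> }
$completerB = { param($wordToComplete, $commandAst, $cursorPosition) <# ... #> }

# Register a single native completer that emits both result sets.
Register-ArgumentCompleter -Native -CommandName git -ScriptBlock {
    param($wordToComplete, $commandAst, $cursorPosition)
    foreach ($completer in $completerA, $completerB) {
        & $completer $wordToComplete $commandAst $cursorPosition
    }
}
```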

Contributor


I honestly don't want to see "feedback providers" included in this. Who is asking for that? What's the use case?

I say this not just because I have little interest in installing "things" that allow a third-party to be called each time I use a tool, but because I hate to see the team spending more time on a feature which, as far as I can tell, hasn't been used by anyone but WinGet and @JustinGrote since it was shipped 😉

Member Author


I'd say you have a fair point :)
If we only care about lazy loading of tab completion for native executables, maybe https://github.com/daxian-dbw/PSNativeToolCompletion is sufficient. It depends on the 'fall-back' completer change introduced in 7.6.

```powershell
@{
module = '<module-name-or-path>[@<version>]', # Module to load to register the completer.
script = '<script-path>', # Script to run to register the completer.
arguments = @('<arg1>', '<arg2>'), # Optional arguments for module loading or script invocation.
```
Member Author


[Discussion point] Do we really need the arguments key?
I personally think it provides additional flexibility. Any concerns if we keep this key?
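For what it's worth, the key would presumably map onto existing parameters; a hedged sketch of how the host might consume it ($config is a hypothetical variable holding the parsed configuration):

```powershell
# Hypothetical consumption of the 'arguments' key during auto-loading.
if ($config.module) {
    # -ArgumentList forwards the arguments to the module's script body.
    Import-Module $config.module -ArgumentList $config.arguments
}
elseif ($config.script) {
    # Pass the arguments when invoking a registration script.
    & $config.script @($config.arguments)
}
```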

Different discussions:
1. Do we really need a folder for each feedback provider?
- [dongbo] Yes, I think we need.
Appx and MSIX packages on Windows have [many constraints](https://github.com/PowerShell/PowerShell/issues/17283#issuecomment-1522133126) that make it difficult to integrate with a broader plugin ecosystem. The way for such an Appx/MSIX tool to expose tab-completer could be just running the tool with a special flag, such as `<tool> --ps-completion`,
Member Author


[Discussion point] Appx/MSIX package cannot integrate with a broader plugin ecosystem easily [reference]:

  • An MSIX package cannot change $PATH
    • An MSIX package cannot set a global system environment variable of any sort on install
  • An MSIX package cannot write to a user folder (or a system folder outside of its package root) on install.

Additional constraints that make it difficult to integrate Appx/MSIX with a broader plugin ecosystem:

  • MSIX packages are "sealed": you're not supposed to be able to poke at their insides without explicit permission
  • Packages are not designed to have their DLLs used directly in-proc from another process, because that may break the updatability guarantees around files inside the package not being used.

We should support Appx/MSIX packages without requiring manual work from the user. But how do we achieve that, given the constraints?


That's trivial. An extensibility model has 2 requirements

  1. Discovery
  2. Access

Both have multiple options with the simplest being Discovery=AppExtension and Access=DynamicDependencies. More details below.

Discovery

A Powershell extension (be it completer or other scenarios) needs to advertise it exists such that Powershell can discover it. MSIX' standard recommendation is for the packaged app to declare an appextension in its manifest and Powershell uses the AppExtensionCatalog API to enumerate these extensions and any needed info about them.

Example snippet in appxmanifest.xml

<uap3:Extension
    Category="windows.appExtension">
    <uap3:AppExtension
        Name="com.microsoft.powershell.completer"
        Id="winget.completer"
        PublicFolder="public"
        DisplayName="WinGet Completer"/>
</uap3:Extension>

and sample enumeration

var catalog = AppExtensionCatalog.Open("com.microsoft.powershell.completer");
foreach (var extension in await catalog.FindAllAsync())
{
    ...use it...
}

Access

Now that you know a package has Powershell extension(s) of interest you need to access file(s) in the package. This requires Windows knows Powershell is using a package's content. The simplest solution is to use the Dynamic Dependencies API to tell Windows you're using the package's content. This ensures Windows doesn't try to service the package (e.g. updating or removing the package) while in use. It also adds the package to the Powershell process' package graph, making its resources available for LoadLibrary, ActivateInstance and other APIs.

Recommended But Not Only

These are the commonly recommended solutions for Discovery and Access but there are other options.

Discovery Alternatives

For discovery, windows.packageExtension can be a good alternative. The windows.appExtension extension has app-scope, i.e. you must declare it under an <Application>, and thus it can only be defined in Main or Optional packages. windows.packageExtension has package-scope (no app required) and can be declared in any package (including Resource and Framework packages); use the PackageExtensionCatalog API to enumerate them. This is new in 24H2. The recommendation if you need downlevel support is to use AppExtension (supported since ~RS1) or use both. The latter's advantage is it supports all package types and a simpler developer experience in some cases. Using both provides the best experience while also providing downlevel support.

Access Alternatives

Dynamic Dependencies enables you to use several tech to interact with a packaged Powershell extension, e.g. WinRT inproc and out-of-proc (ActivateInstance), inproc DLLs (LoadLibrary), inproc .NET assemblies (Assembly.Load*()), reading a text file (CreateFile(GetPackagePath() + "\foo.ps1")) and other similar tech.

NOTE: Inproc is generally discouraged for extensibility models given the reliability and security implications. Technically feasible of course as there are times it's necessary (especially older legacy software), but if you can do better, do better :-)

There are other options if you need code in a package -- Packaged COM OOP servers, AppServices, etc. Some don't require Dynamic Dependencies but bring their own caveats to the table, so while supported, the Dynamic Dependencies options are recommended.


And note, the MSIX discovery+access recommendations are easily secure. Package content is strongly secure and verifiably trustable. When Powershell uses a package's content it can be assured the content is authentic and unmodified from the developer.

Package authors can be Store signed or declare

<uap10:PackageIntegrity>
  <uap10:Content Enforcement="on"/>
</uap10:PackageIntegrity>

and its content is hardened so even a process running as LocalSystem can't alter the package's files. Powershell can use various APIs to confirm the package is signed and unmodified, e.g. Package.SignatureKind, Package.Status.VerifyIsOK(), Package.VerifyContentIntegrityAsync(). This can jibe well with Powershell's ExecutionPolicy options.

@iSazonov
Contributor

@daxian-dbw Could you please explain why PowerShell module infrastructure can not be used/adopted for auto-discovering feedback providers and tab-completers?
I described one approach earlier. Here second one.

All we need to do is

  • Add new triggers. This is unavoidable with any approach.
  • Provide meta information. Here we have alternatives.

We can use the standard psd1 module manifest to expose metadata. For compatibility reasons, we cannot add a new keyword (CompletersToExport), but we can use the existing FunctionsToExport, provided that we accept the naming convention.
For example, it could be in psd1:

FunctionsToExport = @(
    'PSTabCompleter-git',
    'PSFeedbackProvider-msiexec'
)

Such a function could return an IArgumentCompleter/IFeedbackProvider and/or register the entity.

This provides, with minimal effort, almost all the needs that we expect - auto-discovering, lazy loading, signing, versioning and so on.
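To illustrate the proposed convention, such an exported function might look like this sketch (names follow the example above; nothing here is an existing API contract):

```powershell
# Hypothetical exported function following the 'PSTabCompleter-<command>' convention.
function PSTabCompleter-git {
    Register-ArgumentCompleter -Native -CommandName git -ScriptBlock {
        param($wordToComplete, $commandAst, $cursorPosition)
        # ...emit System.Management.Automation.CompletionResult objects for git...
    }
}
```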


so for a predictor to be auto-discovered, it has to be loaded at the startup of an interactive session.

Given that, maybe it's better to have a unified location for all the load-at-startup configurations:


MSIX packages can explicitly declare this sort of information e.g.

<uap3:Extension
    Category="windows.appExtension">
    <uap3:AppExtension
        Name="com.microsoft.powershell.completer"
        Id="winget.completer"
        PublicFolder="public"
        DisplayName="WinGet Completer">
        <uap3:Properties>
            <Tab-Completers>
                <Tab-Completer Type="Powershell" Name="foo" File="foo\bar.ps1" />
                <Tab-Completer Type="WinRT" Name="blah" ActivatableClassId="blah.de.blah.de.blah" Load="startup"/>
                <Tab-Completer Type="DLLExport" Name="meh" File="in\proc.dll" Function="CallMe" Discovery="auto"/>
            </Tab-Completers>
        </uap3:Properties>
</uap3:Extension>

The data under <uap3:Properties> can be as elaborate as you need and desire.

This model can securely provide explicit details and avoid ambiguities (No startup folder? Is that intentional? Accidentally deleted? CHKDSK removed it? Malware tampered?).

This also offers better performance than walking the filesystem and reading/parsing files to determine what to do.

And it ensures clean behavior on uninstall - packages registered for the user are discoverable and cleanly become invisible when the package is deregistered for the user. No concerns about incomplete uninstalls or manual hackery or otherwise 'winrot' leaving the system in inconsistent states.


@DrusTheAxe DrusTheAxe May 1, 2025


BTW if you need to have an ordered list of packaged extensions the common solutions are implicit-ordering-by-name and explicit-ordering.

AppExtensionCatalog etc. enumerate packages, but the results are an unordered list. If you're concerned about collisions, then searching in a list needs a deterministic way to avoid collisions if possible and mitigate/control impact when not. A couple of common options:

1. Implicit Order

You can order the list based on values in the data, e.g. given

<uap3:Extension
    Category="windows.appExtension">
    <uap3:AppExtension
        Name="com.microsoft.powershell.completer"
        Id="winget.completer"
...

you can build a list of tab-completers sorted by [Id, PackageFullName] (not just name as that's not guaranteed unique across packages).

One example is how MSIX' package graph orders Resource packages for a given Main package (alphabetically by the Resource package's Name).

One downside is this can lead to perverse games by not-good-natured actors, e.g. Id="AAAAAAAA.winget.completer" cause Me! Me! Me! I want to appear at the top of lists causes I'm so important...

2. Explicit Order

A common alternative is for the extension to explicitly tell you a hint where they'd like to appear in the search order e.g. Rank in the example

<uap3:Extension
    Category="windows.appExtension">
    <uap3:AppExtension
        ..
        <uap3:Properties>
            <Tab-Completers>
                <Tab-Completer Type="Powershell" Name="foo" Rank="123" File="foo\bar.ps1" />
                <Tab-Completer Type="WinRT" Name="blah" ActivatableClassId="blah.de.blah.de.blah" Load="startup"/>
                <Tab-Completer Type="DLLExport" Name="meh" Rank="-500" File="in\proc.dll" Function="CallMe" Discovery="auto"/>
            </Tab-Completers>
        </uap3:Properties>
</uap3:Extension>

where Rank=integer, default=0, and the end list is sorted small to large (-infinity...0...+infinity).

MSIX does this via AddPackageDependency() as one example. Visual C++ did a similar thing via a #pragma where you could specify when in the order of global constructors a symbol? .obj? should appear.
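In PowerShell terms, the explicit-order merge is small; a sketch assuming $entries holds discovered extension records with an Id property and an optional Rank:

```powershell
# Sort small-to-large by Rank (default 0 when absent), tie-break by Id for determinism.
$ordered = $entries | Sort-Object `
    @{ Expression = { if ($null -ne $_.Rank) { [int]$_.Rank } else { 0 } } },
    Id
```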


@DrusTheAxe DrusTheAxe May 1, 2025


Of course this data is how package authors define their content. Powershell could provide overrides if you so choose, e.g. if my profile has

$TabCompletersOverride = @{ 'winget.completer' = @{ Rank = 100 } }

where you enumerate package extension info and selectively override data with override values (if any). This makes it easy for devs to DoTheirThing(TM) and ItJustWorks(TM) with the user/admin still in control to override devs if need be (e.g. "Yes foo is defined to appear in the ordered list before bar but that's problematic for me so force foo to appear later, 'cause, I say so. My machine, my rules. root administrator owns the machine" 🙂)

In such cases 'overrides' are a rarely used but invaluable option when needed. Mostly things just work but when there's a complication there's a (simple and reasonable) way around it.

Food for thought.


ping

Been a while. Any update on this concept to share here at this time?

@daxian-dbw
Member Author

daxian-dbw commented Apr 29, 2025

@iSazonov There are mainly 2 reasons for not choosing to extend module manifest for the auto-discovery/auto-loading for completers, feedback providers, and potentially predictors.

  1. The completer/feedback provider/predictor is not necessarily backed by a fully-fledged module. It could be a .ps1 file for a native command completer, or a single DLL serving as a bare binary module. In those cases, there is no fully-fledged module on the module path. Steve also described this in the related issue:

    the intent is to enable apps/tools that DON'T have a PowerShell module, they are native commands that want PowerShell integration. We have some partners asking for this, so this isn't a hypothetical situation.

  2. We want to make it easy for users to
    • discover what is deployed for auto-discovery/loading (eagerly or lazily);
    • disable auto-discovery/loading for a particular native command.

There are other reasons too, such as

  • Depending on updating the module manifest means existing modules cannot participate in this new feature, while the described design allows users to quickly enable auto-discovery/loading for existing completer/feedback provider/predictor modules.
  • Complexity and risk of touching the module auto-loading code path, e.g. the module analysis cache and code would need to be updated to accommodate the changes.

@iSazonov
Contributor

@daxian-dbw Thanks for the clarifications! I'm somewhat discouraged as I was expecting arguments like "it's impossible for security reasons" or "it's technically impossible".

I was convinced that modules are the preferred distribution method for public use. The manifests were designed just for convenience. Therefore, I am perplexed that significant efforts will be directed not at expanding the capabilities of the modules, but at creating just another config/plug-in model.

I guess the only reason to create new code is if you need to do it quickly and avoid regression. Although I don't see how the implementation of these requirements would affect the existing module functionality.

Also, I don't see the need to enable and disable on the fly. Since we are talking about an interactive session, the user installs the module if he needs it and deletes it otherwise, in seconds. It's always been like this and it wasn't a problem. Or, if you are talking about a public pre-prepared environment where the user has no rights at all, then in any case he will not be able to disable any feature. If he has some rights, then he can save the desired state or configuration. In that case, why not make the solution more general and allow users to enable and disable the loading of any module? The costs of implementation are the same, the benefits are greater.

Depending on updating module manifest means existing modules cannot participate in this new feature, while the described design allows users to quickly enable auto-discovery/load for existing completer/feedback provider/predictor modules.

I don't understand this argument. Creating or updating a manifest file is not a problem at all.

Complexity and risk of touching the module auto-loading code path. e.g. the module analysis cache and code would need to be updated to accommodate the changes.

If we talk about my proposal, then there is only a naming convention, i.e. this code does not change.
Although I understand you: if the goal is to avoid the risk of regression and ship something new quickly, I believe that is the main factor.

@daxian-dbw
Member Author

daxian-dbw commented Apr 30, 2025

I don't understand this argument. Creating or updating a manifest file is not a problem at all.

@iSazonov not all existing modules will be updated, and we cannot ask users to update a module locally that they don't own.

I don't see the need to enable and disable on the fly. Since we are talking about an interactive session, the user installs the module if he needs it and deletes it otherwise in seconds.

It's very possible that a user needs the module, which offers more than completer/feedback provider/predictor, but just doesn't want the module to participate in auto-discovery/auto-loading for tab completion/feedback provider etc.

if the goal is to avoid the risk of regression and create a new one quickly. I believe that this is the main factor.

The main reasons and some other concerns are listed clearly in my last comment. You can read it again to make sure you don't miss any points.

@iSazonov
Contributor

iSazonov commented May 1, 2025

I don't understand this argument. Creating or updating a manifest file is not a problem at all.

@iSazonov not all existing modules will be updated, and we cannot ask users to update a module locally that they don't own.

Any popular project will do this immediately without a doubt. If a project is frozen and cannot be officially updated, users can always use the traditional profile approach. Many requests for new features have been rejected by the team precisely because there is a simple workaround.

I don't see the need to enable and disable on the fly. Since we are talking about an interactive session, the user installs the module if he needs it and deletes it otherwise in seconds.

It's very possible that a user needs the module, which offers more than completer/feedback provider/predictor, but just doesn't want the module to participate in auto-discovery/auto-loading for tab completion/feedback provider etc.

But auto-loading has worked exactly like this for all these years and there have not been many requests for changes.
If there is such a need, then let's expand the possibilities in a more general way. Why can the user turn off the autoload of a completer, but cannot turn off the autoload of aliases, functions or cmdlets, or specify the exact list of what he needs to autoload from the module? This has been supported for a long time using the Import-Module command explicitly. And it's easy to implement for autoloading.

if the goal is to avoid the risk of regression and create a new one quickly. I believe that this is the main factor.

The main reasons and some other concerns are listed out clearly in my last comment. You can read again to make sure you don't miss any points.

It's just that these arguments didn't convince me that it was necessary to create something completely separate. I believe that expanding existing capabilities and creating a more general solution will bring more benefits.
And I am strongly convinced that rejecting the manifests will cause a lot of problems in the future.
At the same time, manifests provide incredible opportunities for new features.
For example, PowerShell could find git.psd1 next to git.exe and autoload completer (and more). Or it could find git.psd1 in user folder as custom config and filter out entities from standard git.psd1.
Of course, none of this makes sense if the team is forced to choose a short-term solution for the sake of implementation simplicity.

@MatejKafka

MatejKafka commented Oct 29, 2025

This started as a general comment on the RFC, but grew into almost a counter-proposal as I was writing it, not sure if a comment is the right format.

I'm excited that there's an RFC, huge thanks for pushing this forward. :) Overall, I feel like the proposal makes sense and it's in a very good shape, but I'm somewhat worried that what it proposes is a local optimum, not a global one. Below, I describe problems I see both with the current state and this RFC and alternative solutions I see.


Even with this RFC, a command author needs to provide completions for each shell separately, in the form of an executable script, and place the completion script in the right directory for that shell. This adds extra burden for multiple parties:

  1. The author needs to write/generate the completion script separately for each shell.

  2. The author of the installer needs to find the right place for all completion scripts. For PowerShell specifically, this is problematic due to Move PS content out of OneDrive #388, since there will be at least two places to check for, and the RFC intends to support custom paths as well. For programs that are installed by just unzipping an archive somewhere and adding it to PATH, this is not possible.

    As an added pain point, having a single directory where all programs drop their completion scripts is unfortunate for installers, since if there are multiple programs with the same command name, they will keep stepping on each other's files on every update. Also, I'm not sure how mainstream installers (MSI, InnoSetup, NSIS,...) and updaters will react to manual changes to the installed completion manifest – e.g., I wouldn't be surprised if application updates accidentally overwrote the enabled key to true on every update, since that's the original content of the file.

  3. If there are multiple different commands with the same name, or multiple completers for a single command, PowerShell will have to resolve the conflict somehow, with unclear semantics.

  4. PowerShell package managers (primarily PowerShellGet) will most likely need to add explicit support for completers, since the structure and installation path are different from modules and scripts.

  5. Lesser-known shells have no reasonable way to tap into the existing ecosystem of completions, other than shelling out to another supported shell. This is currently also the case for PowerShell on non-Windows platforms.

  6. Many completion scripts will just call the command itself with a specific flag to retrieve the completions. This may potentially be a slow operation, that will have to be done on each completion request from the user.

I see a few general solutions to the listed problems:

Completion lookup

Place the completion script somewhere near the binary, not in a PowerShell-specific location. I have multiple ideas on how to make this work (e.g., the binary could embed the completion script as a resource), but the one I like the most is to look for a <CommandName>.ps1completion file on PATH, e.g. git.ps1completion (or git.exe.ps1completion, don't have a strong preference).

Since command lookup is also done through PATH, this means that for the command author, it is enough to place the file next to the binary and completions will auto-magically work. At the same time, others can provide completions for packages they did not author, just by ensuring that the completion file is on PATH. Conflict resolution should be mostly for free, since there are established conventions around how PATH lookup is done.

This also feels more friendly towards MSIX applications, although the exact loading mechanism would likely have to be adapted, since the completion script probably cannot be exposed as an app alias. I'm not sufficiently knowledgeable about MSIX internals to comment on this.
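The PATH-based lookup described above could be sketched in a few lines. This is a hypothetical illustration: the `.ps1completion` extension is the convention proposed in this comment, and `Find-CompletionScript` is an invented name, not an existing PowerShell feature.

```powershell
# Hypothetical sketch of the proposed lookup: walk PATH in order and return
# the first "<CommandName>.ps1completion" file found, mirroring how command
# resolution itself is done.
function Find-CompletionScript {
    param([Parameter(Mandatory)][string]$CommandName)

    foreach ($dir in $env:PATH -split [IO.Path]::PathSeparator) {
        if ([string]::IsNullOrWhiteSpace($dir)) { continue }
        $candidate = Join-Path $dir "$CommandName.ps1completion"
        if (Test-Path -LiteralPath $candidate) { return $candidate }
    }
    return $null   # no completion file found anywhere on PATH
}
```

Because the walk happens in PATH order, the usual shadowing conventions apply for free: whichever directory wins command resolution also wins completion resolution.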

Static completion format

Instead of a completion script, standardize a mostly static declarative completion format that describes the structure of the command (parameter format, parameter names and descriptions, subcommands,...). Since most CLI libraries are configured in a static way (e.g., with an annotated class/struct), a format that covers the vast majority of common apps is imo feasible, and not overly complex (I've quickly gone through the most popular CLI libraries to verify this, I'd need a deeper analysis to give a more educated guess).

For the remaining apps (e.g., ffmpeg with its filter syntax), the format could provide hooks for specific parts of the metadata; i.e., "if you want to get parameter info for this subcommand, call this executable with these parameters". The shell could then decide whether it wants to call the hook, or just use the static portion of the metadata.

I'm fully aware that this is a much more complex proposal than this RFC, but similarly to how VS Code standardized LSP, PowerShell could solve this problem once and given the size of the project, have a chance of reaching the critical adoption mass to make this an accepted standard.
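As a rough illustration of the idea (entirely hypothetical; no such format exists today, and every key name below is invented), a static declarative description of a command might look like:

```powershell
# Hypothetical static completion manifest, e.g. "git.completion.psd1".
# The schema (Command, Subcommands, ArgumentHook, ...) is made up here
# purely to show the shape such a format could take.
@{
    Command     = 'git'
    Subcommands = @(
        @{
            Name        = 'commit'
            Description = 'Record changes to the repository'
            Parameters  = @(
                @{ Name = '--message'; Alias = '-m'; TakesValue = $true }
                @{ Name = '--amend' }
            )
        }
        @{
            Name = 'checkout'
            # Dynamic hook for values that cannot be known statically,
            # per the "call this executable with these parameters" escape hatch:
            ArgumentHook = @{ Execute = 'git'; Arguments = 'branch --format=%(refname:short)' }
        }
    )
}
```

A shell could consume the static portion directly and decide on its own policy for when (or whether) to invoke the hooks.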

Aside: Based on a random mention on lobste.rs, I understand that someone on the .NET team was experimenting with something similar to this, but I have no closer information, I'll ask around internally. :)

Module-based implementation

An alternative option (both to the RFC and the PATH-based lookup proposed above) would be to re-use modules for this, as @iSazonov already suggested above, although I prefer the above proposals. I feel like the specific pain points mentioned in the response are all solvable, and it avoids adding yet another place where PowerShell looks for scripts, while providing the completion authors with more flexibility:

  1. The completer/feedback provider/predictor are not necessarily backed by a fully-fledged module. It could be a .ps1 file for a native command completer, or a single dll served as a bare binary module. In those cases, there is no fully-fledged module existing in module path. Steve also described this in the related issue:

If an installer can add the completion script and a manifest to a directory, it doesn't seem significantly harder to add a module to the main Modules folder. The manifest entry listing the provided completers could also include a specific script inside the module to invoke, without auto-loading the whole module, to alleviate performance concerns for modules that provide many completers.

  2. We want to make it easy for users to
  • discover what is deployed for auto-discovery/loading (eagerly or lazily);

I think we should surface this in PowerShell anyway (Get-ArgumentCompleter? new property on CommandInfo?), instead of asking users to look through a directory.

  • disable auto-discovery/loading for a particular native command easily.

I'm probably missing something, but I don't see why this would be significantly different for a module as opposed to a dedicated manifest in a different directory.

There are other reasons too, such as

  • Depending on updating module manifest means existing modules cannot participate in this new feature, while the described design allows users to quickly enable auto-discovery/load for existing completer/feedback provider/predictor modules.

A user can provide their own module that declares the completers and import another module, without waiting for the author of another module actually providing the completions to update it.

  • Complexity and risk of touching the module auto-loading code path. e.g. the module analysis cache and code would need to be updated to accommodate the changes.

That's a good point, and I'm not qualified to comment on it. However, it feels like an implementation of the RFC will need to handle many of the same pain points that are already handled for module loading.


The folders for feedback providers and tab-completers will be placed under the same path where modules folders are currently located:

- In-box path: `$PSHOME/feedbacks` and `$PSHOME/completions`

Nit: Shouldn't the directories use CamelCase names, to match all the other dirs? (Modules, Scripts,...)

6. How about on a System Lockdown Mode (SLM) or Restricted remoting environments?
- We use `.psd1` file for metadata, which can be signed if needed.

### Unified Location for load-at-startup Configurations

Agree with this (sub)proposal. In my view, if you have a completer that needs to run on startup, there's no longer anything specific to completers in the loading process, and we should instead have a general mechanism for adding startup scripts that doesn't involve patching $PROFILE.

Among other reasons, if we add a _startup_ folder for completers, I'm quite sure that some clever application dev will go "ha, I can use that for general purpose startup hooks". :)


One possible implementation would be to allow module manifests to specify that the module should be loading during startup. That way, we could just reuse the existing module infrastructure instead of defining yet another mechanism for loading and executing code.

- All modules or scripts that need to be processed at session startup should have configurations deployed in the `startup` folder.

Each item within `startup` is a folder, whose name should be the friendly name of the component, e.g. `"UnixTabCompletion"`.
Within each sub-folder, a `.psd1` file named after the folder name should be defined to configure the auto-discovery of the component.
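For concreteness, the layout described in the quoted text might look like the sketch below. The folder/file names follow the excerpt; the keys inside the `.psd1` are illustrative guesses, since the excerpt does not enumerate them.

```powershell
# Proposed on-disk shape (per the quoted spec text):
#
#   startup/
#   └── UnixTabCompletion/
#       └── UnixTabCompletion.psd1
#
# Possible contents of UnixTabCompletion.psd1 -- key names are hypothetical:
@{
    Name       = 'UnixTabCompletion'                       # friendly name, matches the folder
    ModuleName = 'Microsoft.PowerShell.UnixTabCompletion'  # what to load at session startup
    Enabled    = $true                                     # lets a user opt out without deleting files
}
```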

As I mentioned in a comment above, this sounds exactly like the Modules folder, except with a different manifest structure. I'd prefer to reuse modules for this rather than defining another mechanism.

but I don't want to mess with user's profile to make the feedback provider or tab-completer discoverable.

As a user, I want my feedback providers and tab-completers for the specific native commands to be loaded lazily,
instead of having to use my profile to load them at session startup.

There are testing cases where you may want to disable this as well as showcasing the feature whilst training people on using PowerShell.

Can we get a mechanism to disable this via another parameter on pwsh, an entry in the powershell.config.json file or both?

Can we also get some data around the difference in user experience for lazy loading, in terms of the delay caused by waiting to load vs having already pre-loaded these?
I don't expect the difference to be huge in many cases, but it would be very much worth having (especially when run on older, lower-powered & locked-down devices).

I ask as I feel that there are cases where it makes more sense for users to use session-specific profile loading, like I do with my Minimal Profile, vs the "1 profile to rule it all" approach many use.


For PowerShell commands (Function or Cmdlet), I presume the completion or feedback support,
if there is any, will be from the same module.
So, when the command becomes available, the feedback provider and/or tab-completer will become available too.

This is the ideal situation - however many modules, especially those included in Windows, don't provide this, and it ends up being made available either via a 3rd-party module or via a user's profile.

Comment on lines +29 to +32
- A tool can deploy its feedback provider and/or tab-completer without needing to update a file at a central location, such as the user's profile.
- A tool can remove its deployment cleanly without needing to update a file at a central location.
- PowerShell can discover feedback providers and tab-completers automatically, and load one based on the right trigger.
- A user can enable or disable the auto-discovery for a feedback provider or tab-completer.

Can we add a goal to allow the user the option of pre-load or lazily-load as there will be times when it is time intensive where pre-loading is more suitable than lazy loading?

The proposal is to adopt the existing mechanism used in Bash, Zsh and Fish's completion systems --
have directories contain individual completion scripts for various commands and applications,
whose file names should match the names of the commands.
Those completion scripts are loaded only when their corresponding commands' completion is triggered for the first time.
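In PowerShell terms, such a per-command completion script (say, a `git.ps1` dropped into the completions directory) would presumably just contain a standard `Register-ArgumentCompleter` call. `Register-ArgumentCompleter` is the existing PowerShell API; the fixed subcommand list below is a stand-in for whatever a real completer would do (typically shelling out to the command itself).

```powershell
# Sketch of a lazily-loaded completion script for the native command `git`.
Register-ArgumentCompleter -Native -CommandName git -ScriptBlock {
    param($wordToComplete, $commandAst, $cursorPosition)

    # Illustrative static list; a real script would query `git` for its
    # subcommands and flags.
    'status', 'commit', 'push', 'pull', 'checkout', 'branch' |
        Where-Object { $_ -like "$wordToComplete*" } |
        ForEach-Object {
            [System.Management.Automation.CompletionResult]::new($_)
        }
}
```

Under the proposal, PowerShell would dot-source this file the first time completion is requested for `git`, after which the registered completer handles all subsequent requests in the session.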

This would be true in a lazy load scenario, but not in a preload scenario

Comment on lines +127 to +128
1. Should we add another key to indicate the target OS?
- A feedback provider may only work on a specific OS, such as the `"WinGet CommandNotFound"` feedback provider only works on Windows.

That would require an update to the module manifest, would it not? Or is that what you are proposing here?

- A feedback provider may only work on a specific OS, such as the `"WinGet CommandNotFound"` feedback provider only works on Windows.
- Such a key could be handy if a user wants to share the feedback/tab-completer configurations among multiple machines via a cloud drive.

2. Do we really need a folder for each feedback provider?

Ideally not

Comment on lines +230 to +231
2. Shall we disable the feature with the `-noninteractive` flag?
- `PSReadLine` is disabled when this flag is specified, so maybe this feature should be disabled too.

Makes sense

PublicFolder="public"
DisplayName="WinGet PowerShell Resources">
<uap3:Properties>
<Completer Type="Script" Name="winget.completer" File="assets\winget.completer.s1" />

Suggested change
<Completer Type="Script" Name="winget.completer" File="assets\winget.completer.s1" />
<Completer Type="Script" Name="winget.completer" File="assets\winget.completer.ps1" />

@tstager

tstager commented Apr 28, 2026

I wrote a module that manages imports and exports of native tab-completion scripts, allowing you to add and remove them from the current session. I've been testing it for a few months and it solves some of this problem, at least until a native solution is implemented. It's on the gallery, called CompleterActions.

There is no way to auto-discover a feedback provider or a tab-completer for a specific native command.

As a tool author, I want to provide a feedback provider or a tab-completer along with my tool installation,
but I don't want to mess with user's profile to make the feedback provider or tab-completer discoverable.

Does this same solution also handle scripts and modules? Or extended to do so?

whose file names should match names of the commands.
Those completion scripts are loaded only when their corresponding commands' completion is triggered for the first time.

We will have separate directories for feedback providers and tab-completers, for 2 reasons:

This will make it impossible for an Appx app to participate, since they can't place files outside of their own installed folder.

They could, but it's strongly discouraged. We've done it before in rare cases and they've all gone sideways griefing devs, users and us. They also degrade MSIX's security and integrity models. So, yeah, we prefer not to go here w/o suuuuper high business justification, and even then it usually needs a huge backwards compatibility element.

I don't see sufficient justification here, especially not with Powershell under active development. There are better answers.

so for a predictor to be auto-discovered, it has to be loaded at the startup of an interactive session.

Given that, maybe it's better to have a unified location for all the load-at-startup configurations:


ping



### Expose resources with App Extension

MSIX's standard recommendation is for the packaged app to declare an `appExtension` in its manifest.

Or a packageExtension, as of 10.0.26100.0 (aka Ge)

<uap18:Extension Category="windows.packageExtension">
  <uap3:PackageExtension...>
    ...

Similar to appExtension but other than app->package naming, it can appear in any package (windows.appExtension can only appear in Main+Optional packages)

If you don't otherwise have an application in your package (e.g. "I just want *.ps1, completers and other Powershell plugins/extenders") you can use windows.appExtension. Possible, but unnecessary friction so why bother.


There are 2 blockers in this hypothetical discovery process:
- `wt.exe` is essentially a reparse point. The API to get its target is undocumented. PowerShell used to detect its target but then reverted due to the undocumented API (see [PR#10331] and [PR#16044]).
**What is the suggested way to find the Appx/MSIX package that owns an app execution alias?**

Grubbing through the filesystem to discover the associated package is fragile - in the 'reliability' and 'security' perspectives. Not recommended.

Likewise, a reverse lookup enumerating every package to determine its filesystem artifacts, and then (correctly) know when the latter is about to be used is also fragile in reliability and security, and poses additional perf costs. Not recommended.

windows.packageExtension (like windows.appExtension) was invented for this very reason - to 'discover' what packages offer a desired functionality, and to do so in a reliable, secure and efficient manner.

2. The Windows 11's implementation targets Windows 11, version 22H2 (10.0; Build 22621), and later.
It only provides C and C++ functions, no WinRT types, unlike the Windows App SDK's implementation.

3. I believe the Windows 11's implementation will provide WinRT types eventually.

There's currently no plans or intentions to do so. WinAppSDK's WinRT types with Mdd*() passthrough to the OS APIs meet all the known needs not otherwise met by directly calling the OS Win32/C API.

If you find a scenario where this is insufficient please let us know.

Then at startup, we get all extensions with that namespace and load all of them.
- But, would that be performant enough to do at startup?

2. How to resolve an App Execution Alias to the actual app package? (undocumented API)

This is currently not supported, and I'm highly doubtful it's viable or desirable, for the reasons previously stated.


2. How to resolve an App Execution Alias to the actual app package? (undocumented API)

3. How to get the `AppExtension` information from a specific app package? (without parsing the app manifest ourselves)

I think you're right, the current API doesn't provide this today.

You can emulate it (as noted above), but an API doing this could be more efficient. If this interests you please let us know.

(IMO this would seem a worthy enhancement. But devil's in the details of course)

3. How to get the `AppExtension` information from a specific app package? (without parsing the app manifest ourselves)

4. How to allow user to disable the auto-discovery/loading of PowerShell resources from a particular Appx/MSIX app?
- Shall we have configurations for individual Appx/MSIX apps only for the purpose of disabling?

The 'allow/deny list' concept.

One way is like AppExecutionAlias in Settings, the user can individually control which are enabled or disabled (and similar knobs for administrators to control/override). Upsides to Settings management but that may not be feasible here (for technical and/or non-technical reasons). Alternatively, Powershell tracking allow/deny is simple enough e.g.

bool isEnabled = ApplicationData.LocalSettings["MSIX"]["Packages"][packagefamilyname]["Enabled"];
...
ApplicationData.LocalSettings["MSIX"]["Packages"][packagefamilyname]["Enabled"] = false;

5. How to cache the information that an Appx/MSIX package doesn't have any PS resources declared and avoid the auto-discovery for it when user is using the tool?

Given that supporting auto-discovery/loading for Appx/MSIX packages is completely different from other applications,
it warrants another RFC to design for it specifically.

Got link?
