Draft spec for auto-discovering feedback provider and tab-completer #386
daxian-dbw wants to merge 11 commits into PowerShell:master
Conversation
> Today, to enable a feedback provider or a tab-completer for a native command,
> a user has to import a module or run a script manually, or do that in their profile.
> There is no way to auto-discover a feedback provider or a tab-completer for a specific native command.
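For context, the manual status quo described above usually looks like this in a user's `$PROFILE` (a sketch based on the well-known `dotnet` completion pattern; any native tool that exposes a completion subcommand works similarly):

```powershell
# Manual registration today: this must run in every session (e.g. from $PROFILE).
Register-ArgumentCompleter -Native -CommandName dotnet -ScriptBlock {
    param($wordToComplete, $commandAst, $cursorPosition)
    # 'dotnet complete' prints candidate completions for the current command line.
    dotnet complete --position $cursorPosition "$commandAst" | ForEach-Object {
        [System.Management.Automation.CompletionResult]::new($_, $_, 'ParameterValue', $_)
    }
}
```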
Should we consider predictors in this set of supported auto discoverable "things"?
For example, the completion predictor may be a good candidate for auto-registration, since it doesn't really have any commands.
There won't be a specific trigger for any predictor, so if needed, a predictor module can just be placed under the _startup_ folder of Feedbacks or Completions.
Thinking about it more, I guess we may want to unify the _startup_ folders from feedbacks and completions, since they will all be loaded at startup. Maybe we should have a startup folder at the same level as feedbacks and completions, so any predictor, feedback provider, or tab-completer that needs to be loaded at session startup can be put there.
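To illustrate, a unified layout might look like this (hypothetical paths; just a sketch of the suggestion above, not the spec's final shape):

```
<user-path>/powershell/
  completions/   # loaded lazily, on first completion of a matching native command
  feedbacks/     # loaded lazily, when a matching command triggers feedback
  startup/       # predictors/providers/completers loaded at session startup
```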
> 2. Should we add another key to indicate the target OS?
>    - A feedback provider may only work on a specific OS; for example, the `"WinGet CommandNotFound"` feedback provider only works on Windows.
>    - Such a key could be handy if a user wants to share the feedback/tab-completer configurations among multiple machines via a cloud drive.
I think this could be good, thinking about the Linux command-not-found predictor. Also, if a user wants to bring this configuration across their different machines, they don't need to have multiple different folder structures? I am not entirely sure how they would necessarily do it, but I know some community folks use external tools to share their $PROFILE across machines.
> they don't need to have multiple different folder structures?

Folder structures under feedbacks and completions are the same. I guess a user can create a symbolic link to make `<personal>/powershell/feedbacks` or `<personal>/powershell/completions` point to the folders from a cloud drive.
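Such a symbolic link could be created like this (hypothetical local and cloud-drive paths):

```powershell
# Point the local completions folder at a copy synced via a cloud drive.
New-Item -ItemType SymbolicLink `
    -Path   "$HOME\Documents\PowerShell\completions" `
    -Target "$HOME\OneDrive\pwsh-sync\completions"
```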
> 3. Do we really need a folder for each feedback provider?
Is there a reason why this can't all be in a single JSON file, i.e. `feedbackproviders.json`?
Tool installation needs to easily deploy/remove the auto-discovery configuration. We want to avoid requiring updates to a single shared file, to keep that easy.
I see the key elements here: autodiscover, autoload, and trigger. One option was suggested by me a few years ago in PowerShell/PowerShell#13428. For example, if we introduce a naming standard, we can put
If we need a
Everything for modules is familiar and understandable to users, and it can be implemented in small steps.
> whose file names should match the names of the commands.
> Those completion scripts are loaded only when their corresponding commands' completion is triggered for the first time.
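A minimal sketch of how that first-trigger lookup could behave (hypothetical function name and search roots; not the actual implementation):

```powershell
function Find-CompletionScript {
    param([string]$CommandName)
    # Hypothetical search roots mirroring the proposed completions layout.
    $roots = @("$PSHOME/completions", "$HOME/.config/powershell/completions")
    foreach ($root in $roots) {
        $candidate = Join-Path $root "$CommandName.ps1"
        if (Test-Path $candidate) { return $candidate }
    }
}

# On the first TAB press for 'git', the engine would run the matching script once:
#   . (Find-CompletionScript -CommandName 'git')
```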
> We will have separate directories for feedback providers and tab-completers, for 2 reasons:
This will make it impossible for an Appx app to participate, since they can't place files outside of their own installed folder.
I mentioned Appx and MSIX packages in the "Discussion Points" below. I think for those apps:
- If they want to provide a tab-completer, then the tab-completer needs to be exposed by running the tool with a special flag, such as `<tool> --ps-completion`. The user can then manually create the "deployment+configuration" for the tool using the output script. Or, even better, the user can just run the output script, which will create the deployment automatically.
- If they want to provide a feedback provider, I'm not sure how that will be possible. According to "Add a way to lazily autoload argument completers" PowerShell#17283 (comment), the tool's DLL should not be used by another process, but feedback providers and predictors are binary implementations only.
@daxian-dbw I believe this has been resolved so that we can enable Appx packaged commands to expose PS integration, can you update?
> This will make it not possible for an Appx app to participate since they can't place files outside of their own installed folder

They could, but it's strongly discouraged. We've done it before in rare cases and they've all gone sideways, griefing devs, users, and us. They also degrade MSIX's security and integrity models. So, yeah, we prefer not to go here w/o suuuuper high business justification, and even then it usually needs a huge backwards-compatibility element.
I don't see sufficient justification here, especially not with PowerShell under active development. There are better answers.
Co-authored-by: Steve Lee <slee@microsoft.com>
Co-authored-by: Travis Plunk <travis.plunk@microsoft.com>
Possibly silly question, particularly at this late stage, but as a DLL can contain a bunch of information for how to access it through PowerShell, and in .NET an exe is just another assembly, why not have it be queryable for what its capabilities are? Or am I missing something obvious?

I am! This is for entirely non-PowerShell commands which we want to have autocompletion for. Sorry, my misunderstanding.
> if there is any, will be from the same module.
> So, when the command becomes available, the feedback provider and/or tab-completer will become available too.
> Therefore, the auto-discovery for feedback provider and tab-completer targets native commands only.
Just for the record, there's been a lot of discussion in Discord chat where people are going so far as to use dynamic parameters just to try and create argument completion for standalone scripts. It might be worth considering them as well.
Is that motivated by the inability to define custom classes/enums and dynamic completers in scripts before the parameter block, or am I misunderstanding the request? If so, I feel like a better solution would be to lift that limitation, instead of providing a way to define an external completer.
```powershell
@{
    module    = '<module-name-or-path>[@<version>]'  # Module to load to register the feedback provider.
    arguments = @('<arg1>', '<arg2>')                # Optional arguments for module loading.
    disable   = $false                               # Control whether auto-discovery should find this feedback provider.
}
```
[discussion point] With this design, a user can specify only 1 feedback provider for a native command.
But would there be any reason we want to allow multiple feedback providers for a single native command?
Note that it's not possible to have multiple completers for a single native command, because it's a 1-to-1 mapping in the native completer table, unless the user provides a custom completer script block that aggregates the completion results from multiple tab-completers.
Given that we don't support it for native completers, I don't think we should support it for feedback providers either.
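Such a user-supplied aggregating script block could look like this (hypothetical `mytool` command and inner completers, for illustration only):

```powershell
# Two hypothetical inner completers for the same native command.
$global:completerA = { param($word) 'alpha', 'apply' }
$global:completerB = { param($word) 'add', 'abort' }

# One registered completer that merges the results of both inner completers.
Register-ArgumentCompleter -Native -CommandName mytool -ScriptBlock {
    param($word, $ast, $pos)
    (& $global:completerA $word) + (& $global:completerB $word) |
        Where-Object { $_ -like "$word*" } |
        Sort-Object -Unique
}
```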
I honestly don't want to see "feedback providers" included in this. Who is asking for that? What's the use case?
I say this not just because I have little interest in installing "things" that allow a third-party to be called each time I use a tool, but because I hate to see the team spending more time on a feature which, as far as I can tell, hasn't been used by anyone but WinGet and @JustinGrote since it was shipped 😉
I'd say you have a fair point :)
If we only care about lazy loading of tab completion for native executables, maybe https://github.com/daxian-dbw/PSNativeToolCompletion is sufficient. It depends on the 'fall-back' completer change introduced in 7.6.
```powershell
@{
    module    = '<module-name-or-path>[@<version>]'  # Module to load to register the completer.
    script    = '<script-path>'                      # Script to run to register the completer.
    arguments = @('<arg1>', '<arg2>')                # Optional arguments for module loading or script invocation.
}
```
[Discussion point] Do we really need the arguments key?
I personally think it provides additional flexibility. Any concerns if we keep this key?
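For illustration, the extra flexibility might look like this (hypothetical values; the keys mirror the configuration format quoted above):

```powershell
@{
    script = './register-completer.ps1'
    # Hypothetical: forwarded to the script at invocation time, so one
    # registration script can serve several related tools or modes.
    arguments = @('kubectl', '-UseCache')
}
```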
> Different discussions:
> 1. Do we really need a folder for each feedback provider?
>    - [dongbo] Yes, I think we need.
> Appx and MSIX packages on Windows have [many constraints](https://github.com/PowerShell/PowerShell/issues/17283#issuecomment-1522133126) that make it difficult to integrate with a broader plugin ecosystem. The way for such an Appx/MSIX tool to expose a tab-completer could be just running the tool with a special flag, such as `<tool> --ps-completion`,
[Discussion point] Appx/MSIX packages cannot integrate with a broader plugin ecosystem easily [reference]:

- An MSIX package cannot change `$PATH`
- An MSIX package cannot set a global system environment variable of any sort on install
- An MSIX package cannot write to a user folder (or a system folder outside of its package root) on install

Additional constraints that make it difficult to integrate Appx/MSIX with a broader plugin ecosystem:

- MSIX packages are "sealed": you're not supposed to be able to poke at their insides without explicit permission
- Packages are not designed to have their DLLs used directly in-proc from another process, because that may break the updatability guarantees around files inside the package not being used.

We should support Appx/MSIX packages without requiring the user's manual work. But how do we achieve it given these constraints?
That's trivial. An extensibility model has 2 requirements:
- Discovery
- Access
Both have multiple options, with the simplest being Discovery=AppExtension and Access=DynamicDependencies. More details below.
Discovery
A PowerShell extension (be it completer or other scenarios) needs to advertise that it exists so that PowerShell can discover it. MSIX's standard recommendation is for the packaged app to declare an appextension in its manifest; PowerShell then uses the AppExtensionCatalog API to enumerate these extensions and any needed info about them.
Example snippet in appxmanifest.xml:
```xml
<uap3:Extension
    Category="windows.appExtension">
  <uap3:AppExtension
      Name="com.microsoft.powershell.completer"
      Id="winget.completer"
      PublicFolder="public"
      DisplayName="WinGet Completer"/>
</uap3:Extension>
```
and sample enumeration:
```csharp
// AppExtensionCatalog.Open is a static factory; extensions are enumerated asynchronously.
var catalog = AppExtensionCatalog.Open("com.microsoft.powershell.completer");
foreach (var extension in await catalog.FindAllAsync())
{
    // ...use it...
}
```
Access
Now that you know a package has PowerShell extension(s) of interest, you need to access file(s) in the package. This requires that Windows knows PowerShell is using the package's content. The simplest solution is to use the Dynamic Dependencies API to tell Windows you're using the package's content. This ensures Windows doesn't try to service the package (e.g. updating or removing the package) while it's in use. It also adds the package to the PowerShell process' package graph, making its resources available to LoadLibrary, ActivateInstance, and other APIs.
Recommended But Not Only
These are the commonly recommended solutions for Discovery and Access but there are other options.
Discovery Alternatives
For discovery, windows.packageExtension can be a good alternative. The windows.appExtension extension has app scope, i.e. you must declare it under an `<Application>`, and thus it can only be defined in Main or Optional packages. windows.packageExtension has package scope (no app required), can be declared in any package (including Resource and Framework packages), and uses the PackageExtensionCatalog API to enumerate them. This is new in 24H2. The recommendation, if you need downlevel support, is to use AppExtension (supported since ~RS1) or use both. The latter's advantage is that it supports all package types and a simpler developer experience in some cases. Using both provides the best experience while also providing downlevel support.
Access Alternatives
Dynamic Dependencies enables you to use several technologies to interact with a packaged PowerShell extension, e.g. WinRT in-proc and out-of-proc (ActivateInstance), in-proc DLLs (LoadLibrary), in-proc .NET assemblies (Assembly.Load*()), reading a text file (CreateFile(GetPackagePath()+"\foo.ps1")), and other similar tech.
NOTE: In-proc is generally discouraged for extensibility models given the reliability and security implications. Technically feasible of course, as there are times it's necessary (especially for older legacy software), but if you can do better, do better :-)
There are other options if you need code in a package -- Packaged COM OOP servers, AppServices, etc. Some don't require Dynamic Dependencies but bring their own caveats to the table, so while supported, the Dynamic Dependencies options are recommended.
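On the PowerShell side, a naive discovery scan could be sketched as follows (illustration only; a real implementation would use the AppExtensionCatalog API described above rather than scanning manifests textually, and the extension name is hypothetical):

```powershell
# Windows-only sketch: find installed Appx packages whose manifest mentions
# the hypothetical completer extension name.
Get-AppxPackage | ForEach-Object {
    $manifest = Join-Path $_.InstallLocation 'AppxManifest.xml'
    if ((Test-Path $manifest) -and
        (Select-String -Path $manifest -SimpleMatch 'com.microsoft.powershell.completer' -Quiet)) {
        $_.PackageFullName
    }
}
```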
And note, the MSIX discovery+access recommendations are easily secured. Package content is strongly secure and verifiably trustable. When PowerShell uses a package's content, it can be assured the content is authentic and unmodified from the developer.
Package authors can be Store-signed or declare
```xml
<uap10:PackageIntegrity>
  <uap10:Content Enforcement="on"/>
</uap10:PackageIntegrity>
```
and its content is hardened so that even a process running as LocalSystem can't alter the package's files. PowerShell can use various APIs to confirm the package is signed and unmodified, e.g. Package.SignatureKind, Package.Status.VerifyIsOK(), Package.VerifyContentIntegrityAsync(). This can jive well with PowerShell's ExecutionPolicy options.
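For non-packaged deployments, the analogous trust check on the PowerShell side would rely on existing signing primitives, e.g. (hypothetical script path):

```powershell
# Verify the Authenticode signature of a discovered completion script
# before loading it, in line with ExecutionPolicy expectations.
$sig = Get-AuthenticodeSignature -FilePath './completions/git/git.ps1'
if ($sig.Status -eq 'Valid') {
    Write-Verbose "Signed by $($sig.SignerCertificate.Subject)"
}
```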
@daxian-dbw Could you please explain why the PowerShell module infrastructure cannot be used/adapted for auto-discovering feedback providers and tab-completers? All we need to do is
We can use a standard psd1 module manifest to expose metadata. For compatibility reasons, we cannot add a new keyword ( Such a function could return IArgumentCompleter/IFeedbackProvider and/or register the entity. This provides, with minimal effort, almost all the needs that we expect: auto-discovering, lazy loading, signing, versioning, and so on.
> so for a predictor to be auto-discovered, it has to be loaded at the startup of an interactive session.
>
> Given that, maybe it's better to have a unified location for all the load-at-startup configurations:
MSIX packages can explicitly declare this sort of information, e.g.
```xml
<uap3:Extension
    Category="windows.appExtension">
  <uap3:AppExtension
      Name="com.microsoft.powershell.completer"
      Id="winget.completer"
      PublicFolder="public"
      DisplayName="WinGet Completer">
    <uap3:Properties>
      <Tab-Completers>
        <Tab-Completer Type="Powershell" Name="foo" File="foo\bar.ps1" />
        <Tab-Completer Type="WinRT" Name="blah" ActivatableClassId="blah.de.blah.de.blah" Load="startup"/>
        <Tab-Completer Type="DLLExport" Name="meh" File="in\proc.dll" Function="CallMe" Discovery="auto"/>
      </Tab-Completers>
    </uap3:Properties>
  </uap3:AppExtension>
</uap3:Extension>
```
The data under `<uap3:Properties>` can be as elaborate as you need and desire.
This model can securely provide explicit details and avoid ambiguities (No startup folder? Is that intentional? Accidentally deleted? CHKDSK removed it? Malware tampered?).
This also offers better performance than walking the filesystem and reading/parsing files to determine what to do.
And it ensures clean behavior on uninstall: packages registered for the user are discoverable and cleanly become invisible when the package is deregistered for the user. No concerns about incomplete uninstalls, manual hackery, or other 'winrot' leaving the system in inconsistent states.
BTW, if you need an ordered list of packaged extensions, the common solutions are implicit-ordering-by-name and explicit-ordering.
AppExtensionCatalog etc. enumerate packages, but the results are an unordered list. If you're concerned about collisions, then searching a list needs a deterministic way to avoid collisions if possible and to mitigate/control the impact when not. A couple of common options:
1. Implicit Order
You can order the list based on values in the data, e.g. given
```xml
<uap3:Extension
    Category="windows.appExtension">
  <uap3:AppExtension
      Name="com.microsoft.powershell.completer"
      Id="winget.completer"
      ...
```
you can build a list of tab-completers sorted by [Id, PackageFullName] (not just name, as that's not guaranteed to be unique across packages).
One example is how MSIX's package graph orders Resource packages for a given Main package (alphabetically by the Resource package's Name).
One downside is this can lead to perverse games by not-good-natured actors, e.g. Id="AAAAAAAA.winget.completer" because "Me! Me! Me! I want to appear at the top of lists because I'm so important..."
2. Explicit Order
A common alternative is for the extension to explicitly give you a hint about where it would like to appear in the search order, e.g. Rank in this example:
```xml
<uap3:Extension
    Category="windows.appExtension">
  <uap3:AppExtension
      ..
    <uap3:Properties>
      <Tab-Completers>
        <Tab-Completer Type="Powershell" Name="foo" Rank="123" File="foo\bar.ps1" />
        <Tab-Completer Type="WinRT" Name="blah" ActivatableClassId="blah.de.blah.de.blah" Load="startup"/>
        <Tab-Completer Type="DLLExport" Name="meh" Rank="-500" File="in\proc.dll" Function="CallMe" Discovery="auto"/>
      </Tab-Completers>
    </uap3:Properties>
  </uap3:AppExtension>
</uap3:Extension>
```
where Rank is an integer, default 0, and the end list is sorted small to large (-infinity...0...+infinity).
MSIX does this via AddPackageDependency() as one example. Visual C++ did a similar thing via a #pragma where you could specify when in the order of global constructors a symbol? .obj? should appear.
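The two schemes compose naturally into one stable sort; a sketch with hypothetical descriptor objects:

```powershell
$extensions = @(
    [pscustomobject]@{ Id = 'winget.completer'; PackageFullName = 'Msft.WinGet_1.0';   Rank = 123 }
    [pscustomobject]@{ Id = 'meh';              PackageFullName = 'Contoso.Meh_2.1';   Rank = -500 }
    [pscustomobject]@{ Id = 'blah';             PackageFullName = 'Fabrikam.Blah_1.0'; Rank = 0 }
)
# Primary key: explicit Rank (small to large); tie-breaker: [Id, PackageFullName].
$extensions | Sort-Object Rank, Id, PackageFullName | Select-Object -ExpandProperty Id
# -> meh, blah, winget.completer
```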
Of course, this data is how package authors define their content. PowerShell could provide overrides if you so choose, e.g. if my profile has
```powershell
$tab_completers_override = @{ 'winget.completer' = @{ rank = 100 } }
```
where you enumerate package extension info and selectively override data with the override values (if any). This makes it easy for devs to DoTheirThing(TM) and have ItJustWork(TM), with the user/admin still in control to override devs if need be (e.g. "Yes, foo is defined to appear in the ordered list before bar, but that's problematic for me, so force foo to appear later, 'cause, I say so. My machine, my rules. root administrator owns the machine" 🙂)
In such cases, 'overrides' are a rarely used but invaluable option when needed. Mostly things just work, but when there's a complication there's a (simple and reasonable) way around it.
Food for thought.
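Applying such overrides could be as simple as the following self-contained sketch (hypothetical descriptors and override table):

```powershell
$extensions = @(
    [pscustomobject]@{ Id = 'winget.completer'; Rank = 123 }
    [pscustomobject]@{ Id = 'meh';              Rank = -500 }
)
# User/admin overrides win over package-declared ranks.
$overrides = @{ 'winget.completer' = @{ rank = 100 } }
foreach ($ext in $extensions) {
    if ($overrides.ContainsKey($ext.Id)) {
        $ext.Rank = $overrides[$ext.Id].rank
    }
}
```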
ping
Been a while. Any update on this concept to share here at this time?
@iSazonov There are mainly 2 reasons for not choosing to extend the module manifest for auto-discovery/auto-loading of completers, feedback providers, and potentially predictors.
There are other reasons too, such as
@daxian-dbw Thanks for the clarifications! I'm somewhat discouraged, as I was expecting arguments like "it's impossible for security reasons" or "it's technically impossible". I was convinced that modules are the preferred distribution method for public use. The manifests were designed just for convenience. Therefore, I am perplexed that significant effort will be directed not at expanding the capabilities of modules, but at creating yet another config/plug-in model. I guess the only reason to create new code is if you need to do it quickly and avoid regression, although I don't see how implementing these requirements would affect the existing module functionality.

Also, I don't see the need to enable and disable on the fly. Since we are talking about an interactive session, the user installs the module if they need it and deletes it otherwise in seconds. It's always been like this and it wasn't a problem. Or, if we're talking about a public pre-prepared environment where the user does not have any rights at all, then in any case they will not be able to disable any feature. If there are some rights, it means they can save the desired state or configuration. In this case, why not make the solution more general and allow enabling and disabling the loading of any module? The costs of implementation are the same; the benefits are greater.
I don't understand this argument. Creating or updating a manifest file is not a problem at all.
If we talk about my proposal, then there is only a naming convention, i.e. this code does not change.
@iSazonov Not all existing modules will be updated, and we cannot ask users to update a module locally that they don't own.
It's very possible that a user needs the module, which offers more than a completer/feedback provider/predictor, but just doesn't want the module to participate in auto-discovery/auto-loading for tab completion, feedback providers, etc.
The main reasons and some other concerns are listed out clearly in my last comment. You can read it again to make sure you don't miss any points.
Any popular project will do this immediately without a doubt. If a project is frozen and cannot be officially updated, users can always use the traditional profile approach. Many requests for new features have been rejected by the team precisely because there is a simple workaround.
But auto-loading has worked exactly like this for all these years, and there have not been many requests for changes.
It's just that these arguments didn't convince me that it was necessary to create something completely separate. I believe that expanding existing capabilities and creating a more general solution will bring more benefits.
This started as a general comment on the RFC, but grew into almost a counter-proposal as I was writing it; I'm not sure if a comment is the right format. I'm excited that there's an RFC, huge thanks for pushing this forward. :) Overall, I feel like the proposal makes sense and it's in very good shape, but I'm somewhat worried that what it proposes is a local optimum, not a global one. Below, I describe the problems I see both with the current state and with this RFC, and the alternative solutions I see.

Even with this RFC, a command author needs to provide completions for each shell separately, in the form of an executable script, and place the completion script in the right directory for the shell. This adds extra burden for multiple parties:
I see a few general solutions to the listed problems:

Completion lookup

Place the completion script somewhere near the binary, not in a PowerShell-specific location. I have multiple ideas on how to make this work (e.g., the binary could embed the completion script as a resource), but the one I like the most is to look for a

Since command lookup is also done through PATH, this means that for the command author, it is enough to place the file next to the binary and completions will auto-magically work. At the same time, others can provide completions for packages they did not author, just by ensuring that the completion file is on PATH. Conflict resolution should be mostly free, since there are established conventions around how PATH lookup is done. This also feels more friendly towards MSIX applications, although the exact loading mechanism would likely have to be adapted, since the completion script probably cannot be exposed as an app alias. I'm not sufficiently knowledgeable about MSIX internals to comment on this.

Static completion format

Instead of a completion script, standardize a mostly static declarative completion format that describes the structure of the command (parameter format, parameter names and descriptions, subcommands, ...). Since most CLI libraries are configured in a static way (e.g., with an annotated class/struct), a format that covers the vast majority of common apps is imo feasible, and not overly complex (I've quickly gone through the most popular CLI libraries to verify this; I'd need a deeper analysis to give a more educated guess). For the remaining apps (e.g., ffmpeg with its filter syntax), the format could provide hooks for specific parts of the metadata; i.e., "if you want to get parameter info for this subcommand, call this executable with these parameters". The shell could then decide whether it wants to call the hook, or just use the static portion of the metadata.
I'm fully aware that this is a much more complex proposal than this RFC, but similarly to how VS Code standardized LSP, PowerShell could solve this problem once and, given the size of the project, have a chance of reaching the critical adoption mass to make this an accepted standard.

Aside: Based on a random mention on lobste.rs, I understand that someone on the .NET team was experimenting with something similar to this, but I have no closer information; I'll ask around internally. :)

Module-based implementation

An alternative option (both to the RFC and the PATH-based lookup proposed above) would be to re-use modules for this, as @iSazonov already suggested above, although I prefer the above proposals. I feel like the specific pain points mentioned in the response are all solvable, and it avoids adding yet another place where PowerShell looks for scripts, while providing the completion authors with more flexibility:
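The PATH-based lookup idea above could be sketched as follows (hypothetical `<command>.completion.ps1` naming convention; not part of the RFC):

```powershell
function Find-PathCompletion {
    param([string]$CommandName)
    # Walk PATH the same way command lookup does, checking for a sibling
    # completion file next to each candidate binary location.
    foreach ($dir in ($env:PATH -split [IO.Path]::PathSeparator)) {
        if (-not $dir) { continue }
        $candidate = Join-Path $dir "$CommandName.completion.ps1"
        if (Test-Path $candidate) { return $candidate }
    }
}
```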
If an installer can add the completion script and a manifest to a directory, it doesn't seem significantly harder to add a module to the main
I think we should surface this in PowerShell anyway (
I'm probably missing something, but I don't see why this would be significantly different for a module as opposed to a dedicated manifest in a different directory.
A user can provide their own module that declares the completers and import another module, without waiting for the author of another module actually providing the completions to update it.
That's a good point, and I'm not qualified to comment on it. However, it feels like an implementation of the RFC will need to handle many of the same pain points that are already handled for module loading.
> The folders for feedback providers and tab-completers will be placed under the same path where modules folders are currently located:
|
|
||
| - In-box path: `$PSHOME/feedbacks` and `$PSHOME/completions` |
Nit: Shouldn't the directories use CamelCase names, to match all the other dirs? (Modules, Scripts,...)
> 6. How about in System Lockdown Mode (SLM) or restricted remoting environments?
>    - We use a `.psd1` file for metadata, which can be signed if needed.
>
> ### Unified Location for load-at-startup Configurations
Agree with this (sub)proposal. In my view, if you have a completer that needs to run on startup, there's no longer anything specific to completers in the loading process, and we should instead have a general mechanism for adding startup scripts that doesn't involve patching $PROFILE.
Among other reasons, if we add a _startup_ folder for completers, I'm quite sure that some clever application dev will go "ha, I can use that for general purpose startup hooks". :)
One possible implementation would be to allow module manifests to specify that the module should be loaded during startup. That way, we could just reuse the existing module infrastructure instead of defining yet another mechanism for loading and executing code.
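One way to express that with the existing manifest format could be a flag under `PrivateData.PSData`. The `LoadAtStartup` key below is invented purely for illustration; it is not an existing module-manifest field:

```powershell
# Hypothetical module manifest fragment; 'LoadAtStartup' is an invented
# key, not part of the current module-manifest schema.
@{
    ModuleVersion = '1.0.0'
    RootModule    = 'MyPredictor.psm1'
    PrivateData   = @{
        PSData = @{
            LoadAtStartup = $true   # ask PowerShell to import this module at session startup
        }
    }
}
```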
> - All modules or scripts that need to be processed at session startup should have configurations deployed in the `startup` folder.
>
> Each item within `startup` is a folder whose name should be the friendly name of the component, e.g. `"UnixTabCompletion"`.
> Within each sub-folder, a `.psd1` file named after the folder should be defined to configure the auto-discovery of the component.
As I mentioned in a comment above, this sounds exactly like the Modules folder, except with a different manifest structure. I'd prefer to reuse modules for this rather than defining another mechanism.
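For comparison, the per-component `.psd1` the quoted text describes might look something like the following. The RFC does not define the manifest schema yet, so every key name here is my guess, for illustration only:

```powershell
# Hypothetical content of startup/UnixTabCompletion/UnixTabCompletion.psd1.
# All key names below are invented; the RFC has not specified the schema.
@{
    Name       = 'UnixTabCompletion'
    Type       = 'TabCompleter'        # or 'FeedbackProvider', 'Predictor'
    ModuleName = 'Microsoft.PowerShell.UnixTabCompletion'
    LoadAt     = 'Startup'             # vs. 'OnTrigger' for lazy loading
}
```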
> but I don't want to mess with the user's profile to make the feedback provider or tab-completer discoverable.
>
> As a user, I want my feedback providers and tab-completers for specific native commands to be loaded lazily,
> instead of having to use my profile to load them at session startup.
There are testing cases where you may want to disable this, as well as when showcasing the feature whilst training people on using PowerShell.
Can we get a mechanism to disable this via another parameter on `pwsh`, an entry in the `powershell.config.json` file, or both?

Can we also get some data on the difference in user experience between lazy loading (delays caused by waiting for the load) and having already pre-loaded these?
I don't expect the difference to be huge in many cases, but it would be very much worth having (especially when run on older, lower-powered and locked-down devices).

I ask as I feel there are cases where it makes more sense for users to use session-specific profile loading, like I do with my minimal profile, vs. the one-profile-to-rule-it-all approach many use.
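A hypothetical `powershell.config.json` knob for this could look like the following. The `AutoDiscovery` key and everything under it are invented for illustration; they are not part of the RFC or of the current config schema:

```json
{
  "AutoDiscovery": {
    "Enabled": true,
    "LoadMode": "Lazy",
    "Disabled": [ "WinGet CommandNotFound" ]
  }
}
```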
> For PowerShell commands (Function or Cmdlet), I presume the completion or feedback support,
> if there is any, will come from the same module.
> So, when the command becomes available, the feedback provider and/or tab-completer will become available too.
This is the ideal situation; however, many modules, especially those included in Windows, don't provide this, and it ends up being made available either via a 3rd-party module or via a user's profile.
> - A tool can deploy its feedback provider and/or tab-completer without needing to update a file at a central location, such as the user's profile.
> - A tool can remove its deployment cleanly without needing to update a file at a central location.
> - PowerShell can discover feedback providers and tab-completers automatically, and load one based on the right trigger.
> - A user can enable or disable the auto-discovery for a feedback provider or tab-completer.
Can we add a goal to allow the user the option of pre-loading or lazy loading, as there will be times when loading is time-intensive and pre-loading is more suitable than lazy loading?
> The proposal is to adopt the existing mechanism used in Bash, Zsh and Fish's completion systems --
> have directories contain individual completion scripts for various commands and applications,
> whose file names should match the names of the commands.
> Those completion scripts are loaded only when their corresponding commands' completion is triggered for the first time.
This would be true in a lazy load scenario, but not in a preload scenario
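As a sketch of how the lazy path could work, one can already approximate it with today's public API: register a stub completer that loads the real completion script on the first Tab press. The completions directory and the `kubectl.ps1` script below are assumed locations, not defined by the RFC:

```powershell
# Rough approximation of lazy loading with today's public API: a stub
# completer that dot-sources the real completion script on first use.
Register-ArgumentCompleter -Native -CommandName kubectl -ScriptBlock {
    param($wordToComplete, $commandAst, $cursorPosition)

    # Assumed location; not part of the RFC.
    $script = Join-Path "$HOME/.config/powershell/completions" 'kubectl.ps1'
    if (Test-Path -LiteralPath $script) {
        # The script calls Register-ArgumentCompleter itself for 'kubectl',
        # replacing this stub for all subsequent completions.
        . $script
    }
    # In this naive sketch the very first request yields no results; a
    # built-in implementation would delegate to the freshly loaded completer.
}
```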
> 1. Should we add another key to indicate the target OS?
>    - A feedback provider may only work on a specific OS; for example, the `"WinGet CommandNotFound"` feedback provider only works on Windows.
That would require an update to the module manifest, would it not? Or is that what you are proposing here?
> - A feedback provider may only work on a specific OS; for example, the `"WinGet CommandNotFound"` feedback provider only works on Windows.
> - Such a key could be handy if a user wants to share the feedback/tab-completer configurations among multiple machines via a cloud drive.
>
> 2. Do we really need a folder for each feedback provider?
> 2. Shall we disable the feature with the `-noninteractive` flag?
>    - `PSReadLine` is disabled when this flag is specified, so maybe this feature should be disabled too.
> ```xml
> PublicFolder="public"
> DisplayName="WinGet PowerShell Resources">
>   <uap3:Properties>
>     <Completer Type="Script" Name="winget.completer" File="assets\winget.completer.s1" />
> ```
```suggestion
<Completer Type="Script" Name="winget.completer" File="assets\winget.completer.ps1" />
```
I wrote a module that manages imports and exports of native tab-completion scripts, allowing you to add and remove them from the current session. I've been testing it for a few months, and it solves some of this problem at least until a native solution is implemented. It's on the Gallery, called CompleterActions.
> There is no way to auto-discover a feedback provider or a tab-completer for a specific native command.
>
> As a tool author, I want to provide a feedback provider or a tab-completer along with my tool installation,
> but I don't want to mess with the user's profile to make the feedback provider or tab-completer discoverable.
Does this same solution also handle scripts and modules? Or extended to do so?
> whose file names should match the names of the commands.
> Those completion scripts are loaded only when their corresponding commands' completion is triggered for the first time.
>
> We will have separate directories for feedback providers and tab-completers, for 2 reasons:
This will make it impossible for an Appx app to participate, since it can't place files outside of its own install folder.
They could, but it's strongly discouraged. We've done it before in rare cases and they've all gone sideways griefing devs, users and us. They also degrade MSIX's security and integrity models. So, yeah, we prefer not to go here w/o suuuuper high business justification, and even then it usually needs a huge backwards compatibility element.
I don't see sufficient justification here, especially not with Powershell under active development. There are better answers.
> so for a predictor to be auto-discovered, it has to be loaded at the startup of an interactive session.
>
> Given that, maybe it's better to have a unified location for all the load-at-startup configurations:
ping
Been a while. Any update on this concept to share here at this time?
> ### Expose resources with App Extension
>
> MSIX's standard recommendation is for the packaged app to declare an `appExtension` in its manifest.
Or a `packageExtension`, as of 10.0.26100.0 (aka Ge):

```xml
<uap18:Extension Category="windows.packageExtension">
  <uap3:PackageExtension ...>
    ...
```

Similar to `appExtension`, but other than the app->package naming, it can appear in any package (`windows.appExtension` can only appear in Main + Optional packages).

If you don't otherwise have an application in your package (e.g. "I just want *.ps1, completers and other PowerShell plugins/extenders") you can use `windows.appExtension`. Possible, but unnecessary friction, so why bother.
> There are 2 blockers in this hypothetical discovery process:
> - `wt.exe` is essentially a reparse point. The API to get its target is undocumented. PowerShell used to detect its target but then reverted due to the undocumented API (see [PR#10331] and [PR#16044]).
>
> **What is the suggested way to find the Appx/MSIX package that owns an app execution alias?**
Grubbing through the filesystem to discover the associated package is fragile, from both the 'reliability' and 'security' perspectives. Not recommended.

Likewise, a reverse lookup that enumerates every package to determine its filesystem artifacts, and then (correctly) knows when the latter is about to be used, is also fragile in reliability and security, and poses additional perf costs. Not recommended.
windows.packageExtension (like windows.appExtension) was invented for this very reason - to 'discover' what packages offer a desired functionality, and to do so in a reliable, secure and efficient manner.
> 2. The Windows 11 implementation targets Windows 11, version 22H2 (10.0; Build 22621), and later.
>    It only provides C and C++ functions, no WinRT types, unlike the Windows App SDK's implementation.
>
> 3. I believe the Windows 11 implementation will provide WinRT types eventually.
There are currently no plans or intentions to do so. WinAppSDK's WinRT types with Mdd*() passthrough to the OS APIs meet all the known needs not otherwise met by directly calling the OS Win32/C API.
If you find a scenario where this is insufficient please let us know.
> Then at startup, we get all extensions with that namespace and load all of them.
> - But, would that be performant enough to do at startup?
>
> 2. How to resolve an App Execution Alias to the actual app package? (undocumented API)
This is currently not supported, and I'm highly doubtful it's viable or desirable, for the reasons previously stated.
> 2. How to resolve an App Execution Alias to the actual app package? (undocumented API)
>
> 3. How to get the `AppExtension` information from a specific app package? (without parsing the app manifest ourselves)
I think you're right, the current API doesn't provide this today.
You can emulate it (as noted above), but an API doing this could be more efficient. If this interests you please let us know.
(IMO this would seem a worthy enhancement. But devil's in the details of course)
> 3. How to get the `AppExtension` information from a specific app package? (without parsing the app manifest ourselves)
>
> 4. How to allow users to disable the auto-discovery/loading of PowerShell resources from a particular Appx/MSIX app?
>    - Shall we have configurations for individual Appx/MSIX apps only for the purpose of disabling?
The 'allow/deny list' concept.

One way is like AppExecutionAlias in Settings: the user can individually control which are enabled or disabled (and similar knobs for administrators to control/override). There are upsides to Settings management, but that may not be feasible here (for technical and/or non-technical reasons). Alternatively, PowerShell tracking allow/deny is simple enough, e.g.

```csharp
bool isEnabled = ApplicationData.LocalSettings["MSIX"]["Packages"][packagefamilyname]["Enabled"];
...
ApplicationData.LocalSettings["MSIX"]["Packages"][packagefamilyname]["Enabled"] = false;
```
> 5. How to cache the information that an Appx/MSIX package doesn't have any PS resources declared, and avoid the auto-discovery for it when the user is using the tool?
>
> Given that supporting auto-discovery/loading for Appx/MSIX packages is completely different from other applications,
> it warrants another RFC to design for it specifically.
This is a draft spec for auto-discovering a feedback provider and tab-completer. Relevant GitHub issues: