Merged
Binary file added .DS_Store
Binary file not shown.
2 changes: 1 addition & 1 deletion Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "you"
version = "0.1.80"
version = "0.1.90"
edition = "2024"
description = "Translate your natural language into executable command(s)"
authors = ["Xinyu Bao <baoxinyuworks@163.com>"]
177 changes: 177 additions & 0 deletions PROJECT_STRUCTURE_AND_DESIGN.md
@@ -0,0 +1,177 @@
# You - Project Structure and Design Documentation

## Project Overview

`you` is a Rust-based command-line tool that translates natural language instructions into executable shell commands using Large Language Models (LLMs). The project follows a modular architecture with clear separation of concerns.

## Project Structure

```
you/
├── .github/
│ └── workflows/
│ └── release.yml # GitHub Actions for releases
├── src/
│ ├── agents/ # AI agent implementations
│ │ ├── command_json.rs # JSON command structures
│ │ ├── command_line_explain_agent.rs # Command explanation agent
│ │ ├── mod.rs # Module declarations
│ │ ├── semi_autonomous_command_line_agent.rs # Main command agent
│ │ └── traits.rs # Agent trait definitions
│ ├── arguments.rs # CLI argument parsing
│ ├── cache.rs # Command caching system
│ ├── configurations.rs # User configuration management
│ ├── constants.rs # Application constants
│ ├── helpers.rs # Utility functions
│ ├── information.rs # System context gathering
│ ├── llm.rs # LLM client and communication
│ ├── main.rs # Application entry point
│ ├── shell.rs # Shell command execution
│ ├── styles.rs # UI styling and formatting
│ └── traits.rs # Global trait definitions
├── Cargo.toml # Rust package configuration
├── README.md # Project documentation
├── README_CN.md # Chinese documentation
├── objective.md # Project objectives
├── setup.sh # Installation script
├── update.sh # Update script
└── uninstall.sh # Uninstallation script
```

## Architecture Design

### Core Components

#### 1. Agent System (`src/agents/`)
The project uses an agent-based architecture for processing natural language commands:

- **SemiAutonomousCommandLineAgent**: Main agent that breaks down natural language into executable commands
- **CommandLineExplainAgent**: Specialized agent for explaining existing commands
- **Traits**: Define common interfaces for agent behavior (`Step`, `Context`, `AgentExecution`)
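The trait split above can be sketched in a few lines. This is an illustrative, std-only sketch: the trait names mirror the documented `Step` and `Context` traits, but the method signatures and the `EchoAgent` type are invented for this example.

```rust
// Context manages conversation history; Step drives one unit of agent work.
// Signatures here are assumptions, not the project's actual trait definitions.
trait Context {
    fn push_message(&mut self, role: &str, content: &str);
    fn history(&self) -> &[(String, String)];
}

trait Step {
    // One workflow step: consume input, optionally produce a command string.
    fn step(&mut self, input: &str) -> Option<String>;
}

struct EchoAgent {
    messages: Vec<(String, String)>,
}

impl Context for EchoAgent {
    fn push_message(&mut self, role: &str, content: &str) {
        self.messages.push((role.to_string(), content.to_string()));
    }
    fn history(&self) -> &[(String, String)] {
        &self.messages
    }
}

impl Step for EchoAgent {
    fn step(&mut self, input: &str) -> Option<String> {
        // Record the turn in the context before producing output.
        self.push_message("user", input);
        Some(format!("echo {}", input))
    }
}
```

Because both traits are object-safe in this sketch, different agent types can be swapped behind the same interface, which is what makes the strategy pattern below possible.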

#### 2. LLM Integration (`src/llm.rs`)
- Abstracts OpenAI API communication
- Supports configurable API endpoints and models
- Implements context management for conversation history
- Uses async/await pattern with Tokio runtime

#### 3. Configuration System (`src/configurations.rs`)
- JSON-based configuration storage
- User preferences for CLI tools
- Cache enablement settings
- Global resource initialization pattern

#### 4. Caching System (`src/cache.rs`)
- Stores previously generated commands for reuse
- File-based storage in user's home directory
- Search functionality for command retrieval

#### 5. Command Processing Pipeline
```
Natural Language Input → Agent Processing → LLM Translation →
User Confirmation → Command Execution → Optional Caching
```
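The "User Confirmation" stage of the pipeline might look like the following minimal sketch; the prompt wording and the y/N convention are assumptions about the UI, not the tool's actual implementation.

```rust
use std::io::{self, BufRead, Write};

// Ask the user before executing a generated command. Taking `impl BufRead`
// instead of stdin directly keeps the function testable.
fn confirm(prompt: &str, answers: &mut impl BufRead) -> bool {
    print!("{} [y/N] ", prompt);
    io::stdout().flush().ok();
    let mut line = String::new();
    answers.read_line(&mut line).ok();
    matches!(line.trim(), "y" | "Y" | "yes")
}
```

Defaulting to "no" on any non-affirmative answer is the safe choice when the next step is shell execution.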

### Design Patterns

#### 1. Trait-Based Architecture
- **Context Trait**: Manages conversation history and message handling
- **Step Trait**: Defines workflow steps for agents
- **GlobalResourceInitialization**: Standardizes resource setup

#### 2. Command Pattern
- Commands are represented as structured JSON objects
- Different action types (Execute, Explain, etc.)
- Separation of command representation from execution
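A std-only sketch of the command-as-data idea: a structured value is built first and only later rendered or executed. The field names are invented, and the serde derives the project actually uses for JSON are omitted to keep the example dependency-free.

```rust
// Structured representation of a command, separate from its execution.
#[derive(Debug, PartialEq)]
enum Action {
    Execute,
    Explain,
}

#[derive(Debug)]
struct Command {
    action: Action,
    program: String,
    args: Vec<String>,
}

impl Command {
    // Render the command to a shell-ready string without running it.
    fn render(&self) -> String {
        let mut s = self.program.clone();
        for a in &self.args {
            s.push(' ');
            s.push_str(a);
        }
        s
    }
}
```

Because the `Action` variant travels with the data, the same `Command` value can be routed to an executor, an explainer, or the cache.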

#### 3. Strategy Pattern
- Different agents for different types of operations
- Pluggable LLM backends through configuration
- Configurable command interpreters

#### 4. Builder Pattern
- Used extensively with OpenAI API message construction
- Clap command-line argument building
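The builder style reads like the following sketch; `RequestBuilder` is an invented stand-in to show the shape of the pattern, not the actual async-openai or clap API.

```rust
// Consuming-builder sketch: each method takes `self` by value and returns it,
// so calls chain fluently and the final `build` produces the finished value.
#[derive(Default)]
struct RequestBuilder {
    model: Option<String>,
    messages: Vec<String>,
}

impl RequestBuilder {
    fn model(mut self, model: &str) -> Self {
        self.model = Some(model.to_string());
        self
    }
    fn message(mut self, content: &str) -> Self {
        self.messages.push(content.to_string());
        self
    }
    fn build(self) -> (String, Vec<String>) {
        (
            self.model.unwrap_or_else(|| "default-model".to_string()),
            self.messages,
        )
    }
}
```

The consuming-`self` style makes partially-built requests impossible to use by accident, which is why both dependencies favor it.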

### Key Dependencies

- **async-openai**: OpenAI API client
- **clap**: Command-line argument parsing
- **tokio**: Async runtime
- **serde**: Serialization/deserialization
- **anyhow**: Error handling
- **cchain**: Display and UI utilities
- **console**: Terminal styling
- **indicatif**: Progress indicators

## Data Flow

### 1. Initialization Phase
```
Parse CLI Arguments → Initialize Configurations →
Initialize Cache → Load Contextual Information
```

### 2. Command Processing Phase
```
User Input → Agent Creation → LLM Context Setup →
Natural Language Processing → Command Generation →
User Confirmation → Execution → Optional Caching
```

### 3. Interactive Mode
```
Continuous Loop: User Input → Processing → Execution →
Context Update → Next Input
```
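The loop shape above can be sketched as follows; the exit condition and the "processing" step are stand-ins for the real agent pipeline.

```rust
// Interactive-mode skeleton: each turn's result is folded back into the
// context so the next turn can refer to it.
fn interactive_loop(inputs: &[&str]) -> Vec<String> {
    let mut context: Vec<String> = Vec::new();
    for input in inputs {
        if input.trim().is_empty() || input.trim() == "exit" {
            break;
        }
        // Process → execute (stubbed), then update the running context.
        context.push(format!("ran: {}", input));
    }
    context
}
```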

## Security Considerations

- API keys stored in environment variables
- User confirmation required before command execution
- No hardcoded credentials in source code
- Shell command validation through LLM processing

## Extensibility Points

1. **New Agent Types**: Implement `Step` and `Context` traits
2. **Additional LLM Providers**: Extend LLM client abstraction
3. **Custom Command Types**: Add new action types to command JSON
4. **Storage Backends**: Implement alternative caching mechanisms
5. **UI Enhancements**: Extend styling and display utilities

## Configuration

### Environment Variables
- `YOU_OPENAI_API_BASE` / `DONE_OPENAI_API_BASE`: API endpoint
- `YOU_OPENAI_API_KEY` / `DONE_OPENAI_API_KEY`: Authentication key
- `YOU_OPENAI_MODEL` / `DONE_OPENAI_MODEL`: Model selection
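Resolution of the paired variables might look like the sketch below. The precedence (`YOU_` first, then `DONE_`, then a hardcoded default) and the default URL are assumptions for illustration, not documented behavior.

```rust
use std::env;

// Try the YOU_-prefixed variable, fall back to the DONE_-prefixed one,
// then to a default endpoint (assumed here).
fn resolve_api_base() -> String {
    env::var("YOU_OPENAI_API_BASE")
        .or_else(|_| env::var("DONE_OPENAI_API_BASE"))
        .unwrap_or_else(|_| "https://api.openai.com/v1".to_string())
}
```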

### User Configuration File
- Located in user's home directory
- JSON format for preferences and settings
- Cache enablement and CLI preferences
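Based on the fields visible in `src/configurations.rs` in this diff (`enable_cache` and `preferred_clis`), a minimal configuration file might look like this; the concrete values are illustrative:

```json
{
  "enable_cache": true,
  "preferred_clis": []
}
```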

## Error Handling

- Uses `anyhow` crate for error propagation
- Comprehensive error messages for user guidance
- Graceful handling of LLM API failures
- Validation of user inputs and system requirements
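A std-only sketch of the propagation style: in the real project, `anyhow::Result` plays the role of the boxed error shown here, and the simulated failure stands in for a real API error.

```rust
use std::fmt;

// A typed error carrying a user-facing message.
#[derive(Debug)]
struct ApiError(String);

impl fmt::Display for ApiError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "LLM API failure: {}", self.0)
    }
}

impl std::error::Error for ApiError {}

// Callers use `?` to bubble the error up with its message intact,
// mirroring how `anyhow` propagates context to the top level.
fn call_llm(simulate_failure: bool) -> Result<String, Box<dyn std::error::Error>> {
    if simulate_failure {
        return Err(Box::new(ApiError("connection timed out".to_string())));
    }
    Ok("generated command".to_string())
}
```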

## Performance Considerations

- Async/await for non-blocking LLM API calls
- Local caching to reduce API calls
- Efficient file-based storage for command history
- Progress indicators for long-running operations

## Future Enhancements

1. Support for additional shell interpreters
2. Enhanced caching with semantic search
3. Plugin system for custom agents
4. Web interface for command management
5. Integration with additional LLM providers
6. Command validation and safety checks
7. Batch command processing capabilities
18 changes: 18 additions & 0 deletions README.md
@@ -67,6 +67,24 @@ Start a conversational session to run multiple related commands:
you run
```

### Cache Management

List all cached scripts:

```bash
you list
# or use the alias
you ls
```

Remove a specific cached script:

```bash
you remove <script_name>
# or use the alias
you rm <script_name>
```

### Configure your preferred CLI

You may want to use `fd` over `find`, or prefer using a different CLI rather than letting the LLM guess. In this case, you may update the configuration file located at `~/.you/configurations.json`. Below is an example:
18 changes: 18 additions & 0 deletions README_CN.md
@@ -67,6 +67,24 @@ you explain "find . -type f -name '*.txt' -size +10M"
you run
```

### 缓存管理

列出所有缓存的脚本:

```bash
you list
# 或使用别名
you ls
```

删除特定的缓存脚本:

```bash
you remove <脚本名称>
# 或使用别名
you rm <脚本名称>
```

### 配置您偏好的命令行工具

您可能想使用 `fd` 而不是 `find`,或者偏好使用特定的命令行工具而不是让 LLM 猜测。在这种情况下,您可以更新位于 `~/.you/configurations.json` 的配置文件。以下是一个示例:
20 changes: 20 additions & 0 deletions src/arguments.rs
@@ -28,9 +28,17 @@ pub struct Arguments {
pub enum Commands {
/// Run a command that is described in natural language.
/// The LLM will break down the task and execute the steps.
#[clap(short_flag = 'r')]
Run(RunArguments),
/// Explain a given command
#[clap(short_flag = 'e')]
Explain(ExplainArguments),
/// List all saved scripts in the cache.
#[clap(visible_alias = "ls")]
List(ListArguments),
/// Remove a specified script from the cache.
#[clap(visible_alias = "rm")]
Remove(RemoveArguments),
/// Display the version of `you`
#[clap(short_flag = 'v')]
Version(VersionArguments),
@@ -52,6 +60,18 @@ pub struct ExplainArguments {
pub command: String,
}

#[derive(Debug, Args)]
#[command(group = clap::ArgGroup::new("sources").required(false).multiple(false))]
pub struct ListArguments;

#[derive(Debug, Args)]
#[command(group = clap::ArgGroup::new("sources").required(true).multiple(false))]
pub struct RemoveArguments {
/// Name of the script to remove from cache
#[arg(group = "sources")]
pub script_name: String,
}

#[derive(Debug, Args)]
#[command(group = clap::ArgGroup::new("sources").required(false).multiple(false))]
pub struct VersionArguments;
61 changes: 54 additions & 7 deletions src/cache.rs
@@ -1,6 +1,10 @@
use std::{fs::{create_dir, read_dir, DirEntry}, path::PathBuf};
use std::{
fs::{DirEntry, File, create_dir, read_dir},
io::Write,
path::PathBuf,
};

use anyhow::Result;
use anyhow::{Result, anyhow};

use crate::{
constants::YOU_CACHE_DIRECTORY,
@@ -38,25 +42,68 @@ impl GlobalResourceInitialization for Cache {
}
}

Ok(
Self { scripts }
)
Ok(Self { scripts })
}
}

impl Cache {
/// Refreshes the in-memory scripts list by re-reading from the cache directory
pub fn refresh_scripts(&mut self) -> Result<()> {
self.scripts.clear();
for file in read_dir(acquire_you_home_directory()?.join(YOU_CACHE_DIRECTORY))? {
let file: DirEntry = file?;
if file.metadata().unwrap().is_file() {
let filename: String = file.file_name().to_string_lossy().to_string();
if filename.ends_with(".sh") {
self.scripts.push(file.path());
}
}
}
Ok(())
}

pub fn search(&self, query: &str) -> Option<&PathBuf> {
for script in self.scripts.iter() {
let script_name: &str = script.file_stem().unwrap().to_str().unwrap();
if query == script_name {
return Some(&script);
}
}

None
}

pub fn add_new_script(&mut self, script_name: &str, script_content: &str) -> Result<()> {
let you_cache_directory: PathBuf = acquire_you_home_directory()?.join(YOU_CACHE_DIRECTORY);

let mut file: File =
std::fs::File::create_new(you_cache_directory.join(format!("{}.sh", script_name)))?;
file.write_all(script_content.as_bytes())?;

// Update in-memory scripts after successful file creation
self.refresh_scripts()?;

Ok(())
}

pub fn list_scripts(&self) -> Vec<String> {
self.scripts
.iter()
.map(|script| script.file_stem().unwrap().to_str().unwrap().to_string())
.collect()
}

pub fn delete_script(&mut self, script_name: &str) -> Result<()> {
for script in self.scripts.iter() {
let current_script_name: &str = script.file_stem().unwrap().to_str().unwrap();
if current_script_name == script_name {
std::fs::remove_file(script)?;
// Update in-memory scripts after successful file deletion
self.refresh_scripts()?;
return Ok(());
}
}

Err(anyhow!("Script '{}' not found", script_name))
}
}
13 changes: 12 additions & 1 deletion src/configurations.rs
@@ -19,11 +19,22 @@ impl Display for PreferredCLI {
}
}

#[derive(Debug, Serialize, Deserialize, Clone, Default)]
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Configurations {
#[serde(default)]
pub enable_cache: bool,
preferred_clis: Vec<PreferredCLI>,
}

impl Default for Configurations {
fn default() -> Self {
Self {
enable_cache: false,
preferred_clis: vec![],
}
}
}

impl Display for Configurations {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(&serde_json::to_string_pretty(&self).unwrap())