Application for generating reports of messages and most active users in Slack channels, with AI-powered thread summarization.
.
├── backend/ # FastAPI Server (app)
├── frontend/ # React Application
└── README.md
- Navigate to the frontend directory:

  ```bash
  cd frontend
  ```

- Install dependencies:

  ```bash
  npm install  # or yarn install
  ```

- Start the development server:

  ```bash
  npm start  # or yarn start
  ```

- Access the application:
  - The application will be available at http://localhost:3000
  - Make sure the backend is running at http://localhost:5000
- Configure environment variables (see the Backend section for details)

- Run with Docker:

  ```bash
  docker-compose up --build
  ```

- The backend will be available at http://localhost:5000
- Node.js (version 14 or higher)
- npm or yarn

- Navigate to the frontend directory:

  ```bash
  cd frontend
  ```

- Install dependencies:

  ```bash
  npm install
  # or
  yarn install
  ```

Technologies used:

- React 18
- TypeScript
- Tailwind CSS
- Heroicons (for icons)
- React Router (for navigation)
frontend/
├── src/
│ ├── components/ # Reusable components
│ │ ├── Sidebar.tsx # Navigation sidebar
│ │ ├── ReportForm.tsx # Report generation form
│ │ └── TaskStatus.tsx # Generated reports status
│ ├── types/ # TypeScript definitions
│ │ └── index.ts # Interfaces and types
│ └── App.tsx # Main component
├── package.json # Dependencies and scripts
└── tsconfig.json # TypeScript configuration
- Ensure the backend is running at http://localhost:5000
- If the backend is at a different URL, update the `API_URL` constant in `App.tsx`
To start the development server:

```bash
npm start
# or
yarn start
```

The application will be available at http://localhost:3000
- Report Generation
  - Select report type (messages or top repliers)
  - Enter the channel ID
  - Specify a date range
  - For top repliers, indicate how many users to show

- Thread Summarization (NEW)
  - Generate AI-powered summaries of Slack threads
  - Support for OpenAI and Ollama LLM providers
  - Input thread timestamp and channel ID

- Report Tracking
  - View the status of generated reports
  - Manually update status
  - Download results in CSV or JSON format

- Sidebar
  - Navigation between "Generate Reports" and "Report Status"
  - Responsive and user-friendly design

- ReportForm
  - Intuitive form for report generation
  - Field validation
  - Report type selection with icons

- TaskStatus
  - Display of each report's status
  - Update and download options
  - Detailed report information
- Uses Tailwind CSS for styling
- Responsive design
- Light and dark themes
- Heroicons for icons
For local development:
- Clone the repository
- Install dependencies
- Start the development server
Make sure you have the following components installed:
- Python 3.8+
- Docker
- Docker Compose
- Redis (for Celery)
- Slack API Token
- OpenAI API Key (optional, for OpenAI summarization)
- Ollama (optional, for local LLM summarization)
- Clone the repository:

  ```bash
  git clone https://github.com/pamelars86/slack-reports.git
  cd slack-reports
  ```

- Create a `.env` file in the root of the project with the following content:

  ```env
  # Slack Configuration
  SLACK_TOKEN=your_slack_api_token
  SLACK_HOME="https://your-organization.slack.com"

  # Redis Configuration
  CELERY_BROKER_URL="redis://redis:6379/0"
  result_backend="redis://redis:6379/0"

  # OpenAI Configuration (optional)
  OPENAI_API_KEY=your_openai_api_key
  OPENAI_MODEL=gpt-3.5-turbo

  # Ollama Configuration (optional)
  OLLAMA_HOST=http://localhost:11434
  OLLAMA_MODEL=llama3
  ```
- Build and start the Docker containers:

  ```bash
  docker-compose up --build
  ```

- Start the server:

  ```bash
  docker-compose up
  ```
The application now supports AI-powered summarization of Slack threads using either OpenAI or Ollama:
- OpenAI Integration: Uses GPT models for high-quality summaries
- Ollama Integration: Uses local LLM models for privacy-focused summarization
- Configurable Prompts: Prompts are stored in `app/prompts.yml` for easy customization
- Flexible Architecture: Abstract interface allows easy addition of new LLM providers
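As a purely illustrative sketch of what a prompts file of this kind could contain (the actual keys and wording in `app/prompts.yml` may differ):

```yaml
# Hypothetical structure -- check app/prompts.yml for the actual keys.
summarize_thread:
  system: "You are an assistant that summarizes Slack threads."
  user: "Summarize the following thread:\n{thread_text}"
```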
- `LLMInterface`: Abstract base class for all LLM implementations
- `OpenAILLM`: OpenAI API implementation
- `OllamaLLM`: Ollama local model implementation
- `LLMFactory`: Factory pattern for creating appropriate LLM instances
- Prompts stored in YAML for easy modification without code changes
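A minimal sketch of this architecture, assuming method names such as `summarize` and `create` (the class names come from the list above; everything else is illustrative, not the project's actual code):

```python
from abc import ABC, abstractmethod


class LLMInterface(ABC):
    """Abstract base class every LLM provider implements."""

    @abstractmethod
    def summarize(self, text: str) -> str:
        """Return a summary of the given thread text."""


class OpenAILLM(LLMInterface):
    def summarize(self, text: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[openai summary of {len(text)} chars]"


class OllamaLLM(LLMInterface):
    def summarize(self, text: str) -> str:
        # A real implementation would call a local Ollama server here.
        return f"[ollama summary of {len(text)} chars]"


class LLMFactory:
    """Maps a provider name to the matching LLMInterface implementation."""

    _providers = {"openai": OpenAILLM, "ollama": OllamaLLM}

    @classmethod
    def create(cls, provider: str) -> LLMInterface:
        try:
            return cls._providers[provider]()
        except KeyError:
            raise ValueError(f"Unknown LLM provider: {provider}")


llm = LLMFactory.create("ollama")
print(type(llm).__name__)  # OllamaLLM
```

With this shape, adding a new provider only means writing one more `LLMInterface` subclass and registering it in the factory's mapping.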
To use the endpoints for generating reports, follow these steps:
- Fetch Messages: Use the `/fetch-messages` endpoint to initiate the process of fetching messages from Slack. This operation is asynchronous and will return a `task-id`.

- Top Repliers: Use the `/top-repliers` endpoint to generate a report of the top repliers in your Slack workspace. This operation is also asynchronous and will return a `task-id`.

- Thread Summarization (NEW): Use the `/summarize-thread` endpoint to generate AI summaries of Slack threads:

  ```json
  {
    "channel_id": "C1234567890",
    "thread_ts": "1748458889.115369",
    "llm_provider": "openai"
  }
  ```

- Check Task Status: To check the status of your task, use the `/task-status/{task-id}` endpoint. Replace `{task-id}` with the actual task ID you received from the previous endpoints.
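The asynchronous flow above can be sketched with Python's standard library. This assumes the backend at http://localhost:5000 and treats the response field names (e.g. the task-id key) as assumptions; check the Swagger docs for the actual shapes:

```python
import json
import urllib.request

API_URL = "http://localhost:5000"  # assumption: backend running locally


def post_json(path: str, payload: dict) -> dict:
    """POST a JSON payload to the backend and return the decoded response."""
    req = urllib.request.Request(
        API_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def get_json(path: str) -> dict:
    """GET a backend path and return the decoded JSON response."""
    with urllib.request.urlopen(API_URL + path) as resp:
        return json.load(resp)


def summarize_payload(channel_id: str, thread_ts: str, provider: str = "openai") -> dict:
    """Request body for /summarize-thread, matching the JSON example above."""
    return {"channel_id": channel_id, "thread_ts": thread_ts, "llm_provider": provider}


# With the server running, the flow would be (field names are assumptions):
#   task = post_json("/summarize-thread",
#                    summarize_payload("C1234567890", "1748458889.115369"))
#   status = get_json(f"/task-status/{task['task-id']}")
#   print(status)
```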
For detailed information on the input and output of these endpoints, refer to the Swagger documentation available at http://localhost:5000/apidocs/.
You can find the API documentation in Swagger by accessing the following URL once the server is running:
http://localhost:5000/apidocs/
To generate reports and summaries, make sure all services are running and use the endpoints documented in Swagger.
Contributions are welcome. Please open an issue or a pull request to discuss any changes you would like to make.
This project is licensed under the MIT License. See the LICENSE file for more details.