ConnectAI-E/feishu-openai
🎒 Feishu × (GPT-4 + GPT-4V + DALL·E-3 + Whisper) = a supercharged work experience 🚀 Voice conversations, role-play, multi-topic discussions, image creation, table analysis, document export 🚀
Slowing — last commit 10mo ago
Weakest axis: copyleft license (GPL-3.0) — review compatibility
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 10mo ago
- ✓ 7 active contributors
- ✓ GPL-3.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠Slowing — last commit 10mo ago
- ⚠Concentrated ownership — top contributor handles 70% of recent commits
- ⚠GPL-3.0 is copyleft — check downstream compatibility
What would change the summary?
- → Use as dependency: Concerns → Mixed, if relicensed under MIT/Apache-2.0 (rare for established libs)
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Forkable" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/connectai-e/feishu-openai)
Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/connectai-e/feishu-openai on X, Slack, or LinkedIn.
Onboarding: ConnectAI-E/feishu-openai
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/ConnectAI-E/feishu-openai shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
WAIT — Slowing — last commit 10mo ago
- Last commit 10mo ago
- 7 active contributors
- GPL-3.0 licensed
- CI configured
- Tests present
- ⚠ Slowing — last commit 10mo ago
- ⚠ Concentrated ownership — top contributor handles 70% of recent commits
- ⚠ GPL-3.0 is copyleft — check downstream compatibility
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live ConnectAI-E/feishu-openai
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/ConnectAI-E/feishu-openai.
What it runs against: a local clone of ConnectAI-E/feishu-openai — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in ConnectAI-E/feishu-openai | Confirms the artifact applies here, not a fork |
| 2 | License is still GPL-3.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 333 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of ConnectAI-E/feishu-openai. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/ConnectAI-E/feishu-openai.git
#   cd feishu-openai
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of ConnectAI-E/feishu-openai and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "ConnectAI-E/feishu-openai(\.git)?\b" \
  && ok "origin remote is ConnectAI-E/feishu-openai" \
  || miss "origin remote is not ConnectAI-E/feishu-openai (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. GPL-3.0 LICENSE files usually open
#    with "GNU GENERAL PUBLIC LICENSE" rather than the SPDX identifier, so
#    accept either form.
(grep -qiE "(GPL-3\.0|GNU GENERAL PUBLIC LICENSE)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"GPL-3\.0\"" package.json 2>/dev/null) \
  && ok "license is GPL-3.0" \
  || miss "license drift — was GPL-3.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in \
  code/main.go \
  code/initialization/config.go \
  code/handlers/handler.go \
  code/services/openai/common.go \
  code/services/sessionCache.go
do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 333 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~303d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/ConnectAI-E/feishu-openai"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
A Go-based bot for Feishu (ByteDance's enterprise collaboration platform) that integrates OpenAI's GPT-4, GPT-4V, DALL·E-3, and Whisper APIs to enable multi-modal AI interactions directly within Feishu chat. Users can hold voice conversations, analyze images, generate images from text, and run role-based AI discussions entirely through Feishu's native interface. Modular handler-based architecture: `code/handlers/` contains request dispatchers split by feature (card actions: `card_*_action.go`; event handling: `event_*_action.go`), `code/services/` abstracts the OpenAI integrations (`services/openai/{audio,picture,gpt3,stream,billing}.go`) and caching (`msgCache.go`), and `code/initialization/` bootstraps the config and Feishu client. Single-binary deployment with the Gin web framework serving the webhook endpoint.
👥Who it's for
Enterprise teams and individuals using Feishu who want to augment their collaboration environment with advanced AI capabilities (GPT-4 reasoning, vision analysis, voice transcription, image generation) without leaving the chat application. Developers deploying this need Go expertise and familiarity with OpenAI API + Feishu's event callback system.
🌱Maturity & risk
Actively developed with production-readiness signals: binary releases via GitHub Actions (see .github/workflows/binary-release.yml), Docker containerization (Dockerfile present), configuration examples (config.example.yaml), and role preset system (role_list.yaml). However, multiple features remain marked WIP (🚧): history replay, admin mode, document interaction, and PPT generation suggest the project is feature-complete for core chat but still expanding enterprise capabilities.
Moderate production risk: single Go module (code/go.mod) with 15+ direct dependencies including github.com/sashabaranov/go-openai (community-maintained, not official OpenAI SDK), and time-sensitive integrations with OpenAI/Feishu APIs that may break on upstream changes. No visible test coverage in file list except one gpt3_test.go. Commercial fork (ConnectAI-E) mentioned in README suggests upstream maintenance may fragment. Requires sensitive API keys (OPENAI_API_KEY, FEISHU_* credentials) managed via config.
Active areas of work
Feature expansion visible in WIP markers: document interaction integration, conversation history recovery, admin mode access controls, and presentation generation from chat. The GitHub Actions workflows indicate active CI/CD setup for binary releases and Docker publishing. Recent work likely focuses on enterprise features given the commercial support section in README and mention of '奇绩创坛' investment.
🚀Get running
```bash
git clone https://github.com/ConnectAI-E/feishu-openai.git
cd feishu-openai/code
cp config.example.yaml config.yaml
# Edit config.yaml with OPENAI_API_KEY, FEISHU_APP_ID, FEISHU_APP_SECRET, etc.
go mod download
go run main.go
```
Daily commands:

```bash
cd code && go run main.go
```

The server starts on the configured port (likely `:8080`, Gin's default) and listens for Feishu event POST callbacks. Requires `config.yaml` with valid credentials.
🗺️Map of the codebase
- `code/main.go` — Entry point that initializes the Gin server, Lark client, and configuration; every contributor must understand the startup flow.
- `code/initialization/config.go` — Loads configuration from YAML and environment variables; critical for understanding how the bot reads API keys and feature flags.
- `code/handlers/handler.go` — Core request router that dispatches Lark events to specialized handlers; the nervous system of the entire application.
- `code/services/openai/common.go` — Common OpenAI client setup and utilities; foundational for all AI integrations (GPT, Vision, DALL-E, Whisper).
- `code/services/sessionCache.go` — Session state management using an in-memory cache; essential for maintaining multi-turn conversation context.
- `code/handlers/event_msg_action.go` — Handles incoming chat messages from Lark; demonstrates the primary request flow pattern used throughout handlers.
- `code/initialization/lark_client.go` — Configures the Lark SDK client with credentials; required for all Feishu API interactions.
🛠️How to make changes
Add a New OpenAI Model Integration
1. Create a new service file in `code/services/openai/` (e.g., `embeddings.go`) with a function that calls the OpenAI API (`code/services/openai/embeddings.go`, new).
2. Register the new handler in `code/handlers/handler.go` to route requests to your new service.
3. Add a corresponding event handler in `code/handlers/` (e.g., `event_embeddings_action.go`) to parse Lark payloads and call your service (`code/handlers/event_embeddings_action.go`, new).
4. Update `code/config.example.yaml` to document any new configuration parameters.
Add a New AI Role/Persona
1. Edit `code/role_list.yaml` and add a new entry with role name, description, and system prompt.
2. The role is loaded automatically on startup by `code/initialization/roles_load.go`.
3. Users select the role via card actions; the session cache (`code/services/sessionCache.go`) stores the selected role for that user.
Add a New Card Action (UI Button/Interaction)
1. Create a new handler file in `code/handlers/` named `card_<action_name>_action.go`, following the pattern of existing card handlers (new file).
2. Register the action type in `code/handlers/handler.go`'s routing logic to call your new handler.
3. Return a Lark card JSON response with buttons/inputs; Feishu will POST back to your handler when users interact (see the `SendCardMessage` function in `code/handlers/common.go` for reference).
Implement Rate Limiting or Request Queuing
1. Review the load-balancer pattern in `code/services/loadbalancer/loadbalancer.go` for key rotation.
2. Extend it or create a new rate limiter in `code/services/` (e.g., `rateLimiter.go`, new) using go-cache or a similar library.
3. Call the rate limiter in handlers (e.g., `code/handlers/event_msg_action.go`) before invoking OpenAI services.
🔧Why these technologies
- Go 1.18 + Gin — Lightweight, fast HTTP server for handling Feishu webhook events with minimal latency; goroutines enable concurrent request handling and streaming responses.
- Lark SDK (oapi-sdk-go) — Official Feishu API client; provides type-safe message, card, and user interaction management.
- OpenAI SDK (go-openai) — Standardized client for GPT, Vision, DALL-E, and Whisper; handles auth, streaming, and error retries.
- In-Memory Cache (go-cache) — Fast, simple session storage for conversation history and user state without external dependencies; suitable for single-instance or small deployments.
- YAML Configuration — Human-readable format for API keys, feature toggles, and role definitions; easy for non-developers to customize.
⚖️Trade-offs already made
- **In-memory session cache instead of persistent database**
  - Why: Reduces latency and operational complexity; no external database to manage
  - Consequence: Sessions are lost on restart; does not scale to multiple instances without shared state (sticky sessions or Redis)
- **Streaming responses from OpenAI directly to Lark**
  - Why: Provides real-time token delivery, improving perceived responsiveness
  - Consequence: Higher complexity in response formatting; risk of partial messages if the connection drops mid-stream
- **Single Go binary deployed via Docker**
  - Why: Simple deployment, minimal ops overhead, fast startup
  - Consequence: Harder to scale horizontally without a load balancer and session replication
- **Multiple API keys via a load balancer**
  - Why: Allows rotation to avoid rate limits and spread costs across multiple OpenAI accounts
  - Consequence: Adds complexity; requires careful coordination if keys have different models/capabilities
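The core of the multi-key trade-off is simple round-robin rotation. This sketch shows the minimal idea behind `code/services/loadbalancer/` with stdlib-only atomics; the real module also tracks per-key availability, and these names are illustrative rather than the project's actual API:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// keyPool rotates requests over multiple OpenAI API keys round-robin.
// Illustrative sketch — see code/services/loadbalancer/loadbalancer.go
// for the real implementation, which also handles key health.
type keyPool struct {
	keys []string
	next uint64 // monotonically increasing request counter
}

// Next returns the next key in rotation; safe for concurrent use.
func (p *keyPool) Next() string {
	n := atomic.AddUint64(&p.next, 1) - 1
	return p.keys[n%uint64(len(p.keys))]
}

func main() {
	p := &keyPool{keys: []string{"sk-a", "sk-b", "sk-c"}}
	for i := 0; i < 4; i++ {
		fmt.Println(p.Next()) // sk-a, sk-b, sk-c, sk-a
	}
}
```

The coordination cost mentioned above appears as soon as keys differ in model access or quota: a pure counter cannot express "skip exhausted key", which is why the real load balancer is more involved.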
🚫Non-goals (don't propose these)
- Does not provide user authentication or authorization (assumes all Lark workspace members are trusted)
- Does not store
🪤Traps & gotchas
- **Config must exist:** the application crashes if `config.yaml` is missing; use `config.example.yaml` as a template and populate all `OPENAI_API_KEY`, `FEISHU_APP_ID`, `FEISHU_APP_SECRET` vars.
- **Feishu webhook signature:** incoming requests must match Feishu's HMAC-SHA256 signature (likely verified in `handlers/common.go`); misconfiguration silently drops events.
- **Token limits:** no automatic request-size validation before sending to OpenAI; large file uploads or long context may silently fail or run up billing.
- **Cache expiration:** `msgCache.go` uses an in-memory cache with a hardcoded TTL; restarting the bot clears all conversation history (no persistence layer visible).
- **Rate limiting:** `services/loadbalancer/` suggests multi-token load balancing, but the implementation is sparse; exhausting a single token blocks all users.
- **Feishu SDK version lock:** `go.mod` pins `oapi-sdk-go/v3 v3.0.14`; Feishu API breaking changes require a manual module upgrade.
- **Audio codec:** the `pion/opus` dependency is required for voice features; a missing system opus library causes a runtime panic on some Linux distros.
🏗️Architecture
💡Concepts to learn
- Message stream processing with Server-Sent Events (SSE) — GPT streaming responses in `services/openai/stream.go` use chunked transfer encoding; understanding SSE parsing is essential for building real-time chat features without blocking the Feishu UI
- Token-based load balancing — the `services/loadbalancer/` module distributes requests across multiple OpenAI API keys to avoid rate limits; critical for scaling beyond single-account constraints
- Webhook signature verification (HMAC-SHA256) — Feishu sends requests with cryptographic signatures to prove authenticity; replay attacks and spoofing are prevented by validating signatures in request handlers
- In-memory cache with TTL (time-to-live) — `msgCache.go` uses `go-cache` to store conversation context without a database; understanding cache expiration and eviction is crucial for multi-turn chat reliability
- Multi-modal AI integration patterns — this bot routes user input to different OpenAI endpoints (GPT for text, GPT-4V for images, DALL·E for generation, Whisper for audio); each modality requires different preprocessing and error handling
- Feishu rich-card interactive message format — responses are formatted as interactive cards (buttons, dropdowns, etc.), not plain text; designing UX within the card schema (`handlers/card_*`) requires understanding Feishu's JSON card DSL
- Context window management in multi-turn conversation — GPT has token limits; the system must track cumulative tokens in `msgCache` and decide when to truncate history to stay under limits without breaking conversation coherence
🔗Related repos
- `sashabaranov/go-openai` — upstream dependency providing the Go OpenAI client wrapper; understanding its API design is critical for extending `services/openai/` features
- `larksuite/oapi-sdk-go` — official Feishu OpenAPI SDK used for all bot-Feishu communication; source of truth for webhook formats, card schemas, and event structures
- `continuum/feishu-bot-template` — alternative Feishu bot scaffold in Go; a reference for simpler webhook setup if this project's architecture feels over-engineered for your use case
- `openai/openai-go` — official OpenAI Go SDK (newer than `sashabaranov/go-openai`); this project could migrate to it for better long-term maintainability and official support
- `ConnectAI-E/Feishu-OpenAI` — commercial fork mentioned in the README offering an admin panel, SaaS deployment, and enterprise features (Midjourney, resource management) on top of this codebase
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive unit tests for OpenAI service layer (code/services/openai/)
The OpenAI service module handles critical functionality (GPT3, Vision, Audio, Picture generation) but only has a single test file (gpt3_test.go). Current test coverage is incomplete - missing tests for vision.go, audio.go, picture.go, billing.go, and stream.go. This is high-value because these services are core to the application and new contributors can safely add tests without breaking existing logic.
- [ ] Create code/services/openai/vision_test.go with test cases for image analysis functionality
- [ ] Create code/services/openai/audio_test.go covering audio transcription with different formats (WAV, OGG)
- [ ] Create code/services/openai/picture_test.go for DALL-E-3 image generation tests
- [ ] Expand code/services/openai/gpt3_test.go to cover streaming responses in stream.go
- [ ] Add mock OpenAI client responses using interfaces to avoid external API calls
- [ ] Document test coverage targets in contributing guidelines
Add handler integration tests for message and card action flows (code/handlers/)
The handlers directory contains 12 handler files managing message routing, card interactions, and AI mode switching, but there are no integration tests. The current structure (event_msg_action.go, card_ai_mode_action.go, card_role_action.go, etc.) processes Feishu events but lacks test coverage. This is valuable for catching regressions in user-facing functionality like role switching, message handling, and vision/audio actions.
- [ ] Create code/handlers/handler_test.go with mock Feishu event fixtures
- [ ] Add tests for message flow: event_msg_action.go → OpenAI service → response
- [ ] Add tests for card actions: card_ai_mode_action.go, card_role_action.go, card_vision_action.go state transitions
- [ ] Create integration test for audio processing: event_audio_action.go with test_file/test.wav
- [ ] Mock the Feishu client (initialization/lark_client.go) for isolated handler testing
- [ ] Document expected message/card schemas in test fixtures
Add missing test coverage for utility functions and cache services (code/utils/ and code/services/)
The codebase has utility functions (code/utils/strings.go has tests, but code/utils/audio/ogg.go and code/utils/audio/wav.go lack tests) and cache services (msgCache.go, sessionCache.go) with no test coverage. Audio format conversion is critical for the Whisper integration, and cache logic affects message continuity. This is low-risk and helps document expected behavior.
- [ ] Expand code/utils/strings_test.go or create additional test cases for edge cases
- [ ] Create code/utils/audio/ogg_test.go and code/utils/audio/wav_test.go with sample audio files from test_file/
- [ ] Create code/services/msgCache_test.go testing cache hit/miss patterns and TTL behavior
- [ ] Create code/services/sessionCache_test.go for session isolation and concurrent access
- [ ] Add tests for loadbalancer logic in code/services/loadbalancer/loadbalancer.go
- [ ] Document cache behavior expectations (session scope, message history limits)
🌿Good first issues
- Add unit tests for `services/openai/gpt3_test.go`: the file exists but the testing appears minimal. Expand it to cover stream parsing (`stream.go`), token counting, and error handling for OpenAI API failures.
- Document the card callback action flow: trace how a user clicking a button on a Feishu card reaches a handler and how the response is posted back. Currently unclear from handler naming (`card_common_action.go` vs `card_ai_mode_action.go`). Add a flow diagram to the README.
- Implement missing error responses for rate-limited OpenAI requests: `services/openai/common.go` likely needs explicit handling for 429 responses instead of generic error passthrough. Add a user-friendly message to Feishu when quota is exceeded.
- Extract hardcoded config defaults from `initialization/config.go` into `config.example.yaml`: ensure all possible env vars are documented with examples, making first-time setup self-service.
- Add an integration test for the voice workflow: `event_audio_action.go` → Whisper API → GPT → Feishu response. Currently no test coverage is visible for multi-step pipelines.
⭐Top contributors
- @Leizhenpeng — 70 commits
- @DDMeaqua — 21 commits
- @xmoxmo — 5 commits
- @wenerme — 1 commit
- @hehan-wang — 1 commit
📝Recent commits
- `c5446a3` — Update readme.md (Leizhenpeng)
- `f58d686` — Update readme.md (Leizhenpeng)
- `02d2f64` — Update readme.md (Leizhenpeng)
- `c5800bf` — Update readme.md (Leizhenpeng)
- `5082281` — Update readme.md (DDMeaqua)
- `774a49d` — Update readme.md (Leizhenpeng)
- `e1eaf94` — Update readme.md (Leizhenpeng)
- `83d2a33` — Update readme.md (Leizhenpeng)
- `8737df6` — Update readme.md (Leizhenpeng)
- `79dd756` — Merge pull request #265 from ConnectAI-E/support_vision (Leizhenpeng)
🔒Security observations
Failed to generate security analysis.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.