Arindam200/awesome-ai-apps
A collection of projects showcasing RAG, agents, workflows, and other AI use cases
Healthy across the board
- Permissive license, no critical CVEs, actively maintained — safe to depend on.
- Has a license, tests, and CI — a clean foundation to fork and modify.
- Documented and popular — a useful reference codebase to read through.
- No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 1w ago
- ✓ 11 active contributors
- ✓ Distributed ownership (top contributor 39% of recent commits)
- ✓ MIT licensed
- ✓ CI configured
- ✓ Tests present
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/arindam200/awesome-ai-apps)

Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/arindam200/awesome-ai-apps on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: Arindam200/awesome-ai-apps
Generated by RepoPilot · 2026-05-07 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite the source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/Arindam200/awesome-ai-apps shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit 1w ago
- 11 active contributors
- Distributed ownership (top contributor 39% of recent commits)
- MIT licensed
- CI configured
- Tests present
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live Arindam200/awesome-ai-apps
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/Arindam200/awesome-ai-apps.
What it runs against: a local clone of Arindam200/awesome-ai-apps — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in Arindam200/awesome-ai-apps | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 37 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of Arindam200/awesome-ai-apps. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/Arindam200/awesome-ai-apps.git
#   cd awesome-ai-apps
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of Arindam200/awesome-ai-apps and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "Arindam200/awesome-ai-apps(\.git)?\b" \
  && ok "origin remote is Arindam200/awesome-ai-apps" \
  || miss "origin remote is not Arindam200/awesome-ai-apps (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in \
  "README.md" \
  "CONTRIBUTING.md" \
  "advance_ai_agents/ai-hedgefund/package.json" \
  "advance_ai_agents/agentfield_finance_research_agent/src/main.py" \
  ".github/workflows/lint.yml"
do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 37 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~7d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/Arindam200/awesome-ai-apps"
  exit 1
fi
```
Each check prints `ok:` or `FAIL:`. The script exits non-zero if anything failed, so it composes cleanly into agent loops (`./verify.sh || regenerate-and-retry`).
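That composition can be sketched as a small wrapper. This is a hypothetical sketch: it assumes you saved the script above as `verify.sh` in the repo root, and the regeneration branch is a placeholder message rather than a real retry.

```bash
# Hypothetical wrapper around the verify script above. Assumes you saved
# it as verify.sh in the repo root; the regeneration branch is a stub.
if [ -f verify.sh ]; then
  if bash verify.sh; then status=fresh; else status=stale; fi
else
  status=unknown   # no script on disk yet — treat like stale
fi
case "$status" in
  fresh) echo "artifact verified — agent may proceed" ;;
  *)     echo "artifact $status — regenerate at https://repopilot.app/r/Arindam200/awesome-ai-apps" ;;
esac
```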
⚡TL;DR
Awesome AI Apps is a curated collection of 80+ production-ready examples and tutorials for building LLM-powered applications using frameworks like LangChain, CrewAI, and AutoGen. It demonstrates concrete patterns for RAG systems, multi-agent workflows, voice assistants, MCP-backed tools, and memory-augmented agents across Python, TypeScript, and Jupyter notebooks, with working code you can run immediately.

Structurally, it is a multi-project monorepo organized by AI pattern category: advance_ai_agents/ contains multi-agent workflows (agentfield_finance_research_agent with a Python src/ plus an HTML UI; ai-hedgefund with TypeScript services and Docker), with separate directories for RAG, voice, and starter examples. Each subdirectory is self-contained with its own requirements.txt, Makefile, and README.
👥Who it's for
AI/ML engineers and full-stack developers building with LLMs who need battle-tested recipes and reference implementations. Specifically: those integrating RAG pipelines, orchestrating agent teams, adding voice interfaces, or learning to structure production AI applications beyond toy examples.
🌱Maturity & risk
Actively maintained with 80+ examples, organized in clear subdirectories (advance_ai_agents/, rag_applications/, voice_agents/, etc.), includes GitHub Actions CI/lint pipeline, and has comprehensive issue templates and contribution guidelines. Not a single-file project but a living collection—commit recency and test coverage vary per subdirectory, making it a reference library rather than a framework to depend on as a core dependency.
Low risk as a reference repo since you fork/copy examples rather than depend on it as a library. However, individual examples may have pinned dependencies (yfinance, openai, agentfield visible in requirements) that drift—verify compatibility when adapting. No monolithic release cycle means you cherry-pick what you need; single-maintainer (Arindam200) concentration risk for the curation itself, not the code.
Active areas of work
Primary activity is expanding example coverage across emerging frameworks (recent additions like ai-hedgefund with Nebius AI integration visible in config/). GitHub issue templates and PR template active suggest ongoing contributions. Dependabot configured for security updates.
🚀Get running
git clone https://github.com/Arindam200/awesome-ai-apps.git
cd awesome-ai-apps
cd advance_ai_agents/agentfield_finance_research_agent
pip install -r requirements.txt
cp .env.example .env
# Configure OpenAI API key in .env
make run
Daily commands:
Per-project variation. For agentfield_finance_research_agent: make run executes src/main.py. For ai-hedgefund (TypeScript): bun install && bun dev. Check individual Makefile or package.json. Most Python projects: python src/main.py after pip install -r requirements.txt.
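A dispatch helper along these lines picks the right command per project. This is a sketch; the entry-point conventions (Makefile, package.json, src/main.py) are taken from the paragraph above, and `run_example` is a hypothetical name.

```bash
# Sketch: dispatch to the right run command based on what a project ships.
run_example() {
  d=$1
  if   [ -f "$d/Makefile" ];     then (cd "$d" && make run)
  elif [ -f "$d/package.json" ]; then (cd "$d" && bun install && bun dev)
  elif [ -f "$d/src/main.py" ];  then (cd "$d" && pip install -r requirements.txt && python src/main.py)
  else echo "no known entry point in $d"
  fi
}
run_example advance_ai_agents/agentfield_finance_research_agent
```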
🗺️Map of the codebase
- README.md — Master index of 80+ AI applications across RAG, agents, workflows, and voice; every contributor must understand the repo's scope and project categories
- CONTRIBUTING.md — Defines contribution guidelines, project structure conventions, and submission requirements that all new agents and apps must follow
- advance_ai_agents/ai-hedgefund/package.json — Primary TypeScript/Node.js workflow engine demonstrating multi-step agent orchestration and parallel analysis; the most complex production example in the repo
- advance_ai_agents/agentfield_finance_research_agent/src/main.py — Core Python agent entry point showcasing skills, reasoners, and schemas; an essential reference for agent architecture patterns
- .github/workflows/lint.yml — CI pipeline that enforces code quality standards across all 600 files and multiple language ecosystems
- .github/PULL_REQUEST_TEMPLATE.md — Standardized PR structure that all contributors must use, ensuring consistent documentation and testing across agent submissions
🛠️How to make changes
Add a New Advanced Agent (e.g., finance research, job matching)
- Create the agent directory under advance_ai_agents/{your_agent_name}/ with README.md, requirements.txt, and a src/ folder (advance_ai_agents/agentfield_finance_research_agent/README.md)
- Define agent skills (tools) using OpenAI-compatible or custom tool definitions (advance_ai_agents/agentfield_finance_research_agent/src/skills.py)
- Implement the reasoning loop that chains LLM calls with tool use (advance_ai_agents/agentfield_finance_research_agent/src/reasoners.py)
- Define request/response schemas for validation (advance_ai_agents/agentfield_finance_research_agent/src/schemas.py)
- Create the main.py entry point and add an HTTP endpoint or CLI interface (advance_ai_agents/agentfield_finance_research_agent/src/main.py)
- Submit a PR with README, demo output, and agent description following the template (.github/PULL_REQUEST_TEMPLATE.md)
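The directory layout from the first step can be scaffolded like this. A sketch only: `my_new_agent` is a hypothetical name, and a temp dir stands in for the repo root so the commands are side-effect free.

```bash
# Sketch: scaffold the documented agent layout. "my_new_agent" is a
# hypothetical name; a temp dir stands in for the repo root.
root=$(mktemp -d)
agent="my_new_agent"
mkdir -p "$root/advance_ai_agents/$agent/src"
touch "$root/advance_ai_agents/$agent/README.md" \
      "$root/advance_ai_agents/$agent/requirements.txt"
touch "$root/advance_ai_agents/$agent/src/main.py" \
      "$root/advance_ai_agents/$agent/src/skills.py" \
      "$root/advance_ai_agents/$agent/src/reasoners.py" \
      "$root/advance_ai_agents/$agent/src/schemas.py"
find "$root/advance_ai_agents" -type f | sort
```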
Add a New Workflow Step (TypeScript-based agents)
- Create a .step.ts file in the steps/{category}/ directory following the naming convention (advance_ai_agents/ai-hedgefund/steps/general/finance-data.step.ts)
- Implement the step interface with input/output types and execution logic (advance_ai_agents/ai-hedgefund/steps/parallel/fundamental-analyst.step.ts)
- Register the step in the workflow orchestrator (StateService or coordinator) (advance_ai_agents/ai-hedgefund/services/utils/StateService.ts)
- Optionally create a .step.tsx React component for UI rendering (advance_ai_agents/ai-hedgefund/steps/noop/00-noop.step.tsx)
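A skeleton for such a step file can be sketched as below. All names are hypothetical (the repo's actual step interface may differ), and a temp dir stands in for the ai-hedgefund root.

```bash
# Sketch: scaffold a hypothetical .step.ts file following the naming convention.
root=$(mktemp -d)
mkdir -p "$root/steps/general"
cat > "$root/steps/general/my-metric.step.ts" <<'TS'
// Hypothetical step skeleton: typed input/output plus an execute function,
// mirroring the .step.ts pattern described above.
export type Input = { ticker: string };
export type Output = { score: number };
export async function execute(input: Input): Promise<Output> {
  return { score: 0 }; // a real step would compute from input here
}
TS
ls "$root/steps/general"
```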
Add External Service Integration
- Create a new service file in the services/ folder (e.g., services/CustomApiService.ts) (advance_ai_agents/ai-hedgefund/services/nebius-ai/NebiusAIService.ts)
- Implement the service class with methods wrapping API calls and error handling (advance_ai_agents/ai-hedgefund/services/nebius-ai/FinanceDataService.ts)
- Register the service in ServiceFactory for dependency injection (advance_ai_agents/ai-hedgefund/services/utils/ServiceFactory.ts)
- Add environment variable docs to .env.example (advance_ai_agents/ai-hedgefund/env.example)
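The first and last steps can be sketched together. `CustomApiService` and `CUSTOM_API_KEY` are the hypothetical names from the checklist above; a temp dir stands in for the ai-hedgefund root.

```bash
# Sketch: scaffold a hypothetical external-service integration.
base=$(mktemp -d)   # stands in for advance_ai_agents/ai-hedgefund
mkdir -p "$base/services"
cat > "$base/services/CustomApiService.ts" <<'TS'
// Hypothetical service skeleton following the wrap-and-register pattern above.
export class CustomApiService {
  constructor(private readonly apiKey: string) {}
  async fetchData(query: string): Promise<unknown> {
    // wrap the external API call and its error handling here
    return { query };
  }
}
TS
echo "CUSTOM_API_KEY=" >> "$base/env.example"   # document the new env var
ls "$base/services"
```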
🔧Why these technologies
- TypeScript + Node.js (ai-hedgefund) — Enables fast iteration on complex multi-step workflows, good debugging of async orchestration, and server deployment to production cloud environments
- Python + Pydantic (agentfield, candidate_analyser, car_finder_agent) — Rapid prototyping of agent logic, native ML/data science library ecosystem, and easier integration with research LLM frameworks (LangChain, AutoGen)
- OpenAI API + Nebius AI — Production-grade LLM inference with function calling, structured outputs, and competitive cost/latency for financial and research use cases
- Workflow orchestration (custom steps + state service) — rationale was not captured at generation time; see ai-hedgefund's steps/ directories and services/utils/StateService.ts for the pattern in practice
🪤Traps & gotchas
- Individual examples require specific API keys (OpenAI, Nebius AI, etc.) in .env files — cloning alone won't work without configuration.
- ai-hedgefund requires Bun, not standard Node/npm.
- Some examples pin specific model names (e.g., gpt-4, gpt-3.5-turbo) whose pricing or availability may change.
- The Pydantic >=2.0 requirement is strict; v1-style models will break.
- There is no centralized test suite — coverage varies wildly per subdirectory.
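A quick preflight along these lines surfaces the two most common setup traps before you run anything. A sketch, assuming `python3` is on PATH; adapt the messages to taste.

```bash
# Preflight sketch: check the Pydantic major version and Bun availability.
msg=$(python3 - <<'PY'
try:
    import pydantic
    major = pydantic.VERSION.split(".")[0]
    print("pydantic v2 ok" if major == "2" else "pydantic v1 detected — these examples need >=2.0")
except ImportError:
    print("pydantic not installed — run: pip install -r requirements.txt")
PY
)
echo "$msg"
command -v bun >/dev/null \
  && echo "bun found" \
  || echo "bun missing — required for ai-hedgefund (plain npm will not work)"
```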
🏗️Architecture
💡Concepts to learn
- Retrieval-Augmented Generation (RAG) — Core pattern across rag_applications/ subdirectory—essential for grounding LLM outputs in external data without retraining; taught implicitly through working examples here
- Multi-Agent Orchestration — advance_ai_agents/ showcases teams of specialized agents coordinating via reasoners and skill sharing; understanding task decomposition and agent communication is central to this repo's philosophy
- Tool/Skill Binding — src/skills.py pattern (yfinance, APIs) demonstrates how to give agents external capabilities—critical for building agents that act on real data, not just converse
- Streaming & Server-Sent Events (SSE) — src/stream.py and TypeScript services show real-time token/event streaming from LLMs to frontends—essential UX pattern for responsive agent UIs visible in this codebase
- Pydantic v2 Schema Validation — Strict dependency (>=2.0) in requirements.txt; src/schemas.py uses Pydantic models for type-safe agent input/output contracts—no workaround for older versions in this repo
- Model Context Protocol (MCP) — Separate category of examples (🗂️ MCP Agents) showing standardized tool/resource interface for agents—emerging standard you'll encounter if extending examples
- Docker Multi-Service Orchestration — ai-hedgefund/docker-compose.yml demonstrates coordinating Python agents + TypeScript services + databases—practical for deploying examples beyond localhost
🔗Related repos
- langchain-ai/langchain — Core LLM orchestration framework used throughout this repo's examples; essential for understanding the agent patterns here
- crewai-framework/crewai — Alternative multi-agent framework referenced in examples; a direct competitor approach to the agent teamwork shown in this collection
- vercel-labs/ai — TypeScript-first LLM SDK used in some examples (likely ai-hedgefund) for streaming and structured outputs
- mcpjs/mcp — Model Context Protocol ecosystem; this repo includes MCP agent examples (🗂️ MCP Agents category) that extend standard agent capabilities
- getmoxy/moxy — Voice/audio agent framework overlapping with the 🎙️ Voice Agents examples in this repo
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add Python dependency validation workflow for advance_ai_agents projects
Currently, the repo has a lint.yml workflow but no Python-specific CI/CD for the multiple Python projects in advance_ai_agents/. Projects like agentfield_finance_research_agent and others have requirements.txt files but no automated validation of dependencies, missing imports, or version conflicts. This would catch broken dependencies early and ensure all Python projects are installable.
- [ ] Create .github/workflows/python-validate.yml that runs on Python projects
- [ ] Add dependency check using pip-audit or similar for each requirements.txt in advance_ai_agents/*/
- [ ] Add pytest discovery and basic import validation for advance_ai_agents/*/src/ directories
- [ ] Test on agentfield_finance_research_agent/requirements.txt and any other Python projects
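The dependency check in the second item could boil down to a loop like this. A sketch, assuming pip-audit is installed (`pip install pip-audit`) and a repo-root working directory.

```bash
# Sketch of the per-project audit loop such a workflow could run.
audited=0
for req in advance_ai_agents/*/requirements.txt; do
  [ -f "$req" ] || continue
  echo "auditing $req"
  pip-audit -r "$req" || echo "FAIL: issues found in $req"
  audited=$((audited+1))
done
echo "audited $audited requirements file(s)"
```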
Add Node.js/TypeScript CI workflow for advance_ai_agents/ai-hedgefund
The ai-hedgefund project uses TypeScript (services/, steps/ directories) and has package.json with bun.lock, but there's no CI workflow validating builds, linting, or type checking. This is a significant project with multiple service files (NebiusAIService.ts, FinanceDataService.ts, etc.) that would benefit from automated type safety and build validation.
- [ ] Create .github/workflows/node-validate.yml targeting advance_ai_agents/ai-hedgefund/
- [ ] Add steps for: bun install, tsc --noEmit (TypeScript type checking), eslint on services/ and steps/ dirs
- [ ] Add build validation for TypeScript step files and service implementations
- [ ] Ensure workflow respects the Dockerfile and docker-compose.yml setup if applicable
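The workflow body could reduce to commands like these. A sketch only: the `bunx tsc`/`bunx eslint` invocations are assumptions about the project's toolchain, and the skip guard keeps it safe to run outside the repo.

```bash
# Sketch of the validation commands such a workflow could run.
dir=advance_ai_agents/ai-hedgefund
if [ -d "$dir" ]; then
  (cd "$dir" \
    && bun install --frozen-lockfile \
    && bunx tsc --noEmit \
    && bunx eslint services steps)
else
  echo "skip: $dir not found — run from the repo root"
fi
```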
Create project metadata validation and auto-indexing for README discovery
The repo contains 80+ projects but there's no automated system to validate that each project subdirectory follows expected structure (README.md, requirements.txt or package.json, proper .env.example, etc.). This would help maintain consistency and could auto-generate or validate the table of contents mentioned in README.md.
- [ ] Create .github/workflows/project-metadata-validate.yml
- [ ] Validate each top-level project directory (advance_ai_agents/*/, etc.) has: README.md, LICENSE or inherits repo LICENSE, and appropriate config files (.env.example for Python, package.json/bun.lock for Node)
- [ ] Add check to ensure README.md exists and contains basic structure (description, setup, requirements)
- [ ] Report missing files as workflow warnings to catch incomplete project submissions
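The structural validation could be sketched as the loop below. Directory names are assumed from the repo layout described earlier; a real workflow would emit GitHub Actions warnings instead of plain echoes.

```bash
# Sketch: flag project directories missing a README or dependency manifest.
warnings=0
for d in advance_ai_agents/*/ rag_applications/*/ voice_agents/*/; do
  [ -d "$d" ] || continue
  [ -f "${d}README.md" ] \
    || { echo "warn: ${d} has no README.md"; warnings=$((warnings+1)); }
  [ -f "${d}requirements.txt" ] || [ -f "${d}package.json" ] \
    || { echo "warn: ${d} has no dependency manifest"; warnings=$((warnings+1)); }
done
echo "$warnings structural warning(s)"
```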
🌿Good first issues
- Add missing type hints to advance_ai_agents/agentfield_finance_research_agent/src/reasoners.py and src/stream.py—currently untyped Python that should be annotated for IDE support and contributor clarity.
- Create a unified Docker Compose example combining Python agent (agentfield_finance_research_agent) + TypeScript service (ai-hedgefund) to demonstrate multi-service orchestration, similar to ai-hedgefund/docker-compose.yml but cross-language.
- Expand .github/README_TEMPLATE.md with concrete examples of required .env variables, example output, and troubleshooting for each project category (RAG vs voice vs agents)—currently template is bare.
⭐Top contributors
Click to expand
Top contributors
- @Arindam200 — 39 commits
- @shivaylamba — 22 commits
- @Astrodevil — 11 commits
- @3rd-Son — 11 commits
- @axsaucedo — 6 commits
📝Recent commits
Click to expand
Recent commits
- 902fe3e — Add flowchart for PDF processing pipeline (shivaylamba)
- 9e83afa — Add advanced RAG with reranking app (shivaycb)
- bd1dd08 — add updated tests for self healing pipeline (shivaylamba)
- 2cc9059 — feat: add LiveKit voice agent with web search example (Arindam200)
- 123dc65 — docs: restructure fine_tuning README with per-project folder layout (Arindam200)
- de37315 — feat: add new agent, RAG, fine-tuning, and voice examples (Arindam200)
- bcf6ce7 — tokenfactory legal (shivaylamba)
- 8bce125 — add legal-tech (shivaylamba)
- 47e3de2 — feat: add healthcare contact center voice agent (Arindam200)
- 26e7cf7 — feat: split langchain_langgraph_starter into LangChain and LangGraph starters (Arindam200)
🔒Security observations
- High · Hardcoded Credentials in Environment Files — advance_ai_agents/agentfield_finance_research_agent/.env.example, advance_ai_agents/ai-hedgefund/env.example, advance_ai_agents/car_finder_agent/api.env. Multiple example env files are present in the repository. While these are examples, developers may commit actual .env files with secrets, and a .gitignore doesn't guarantee protection if misconfigured. Fix: ensure .env files are listed in .gitignore; use pre-commit hooks to prevent accidental commits of secrets; consider a secrets management tool such as HashiCorp Vault, AWS Secrets Manager, or GitHub Secrets.
- High · Outdated and Vulnerable Dependencies — advance_ai_agents/agentfield_finance_research_agent/requirements.txt. The requirements.txt includes 'openai' without a pinned version, and 'pydantic>=2.0' allows any 2.x release. Unpinned dependencies can introduce vulnerabilities via transitive dependencies; 'yfinance' and 'agentfield' without version pinning pose supply-chain risk. Fix: pin all dependencies to specific versions (e.g., 'openai==1.3.0' instead of 'openai'); check for known vulnerabilities with 'pip-audit' or 'safety'; review the existing Dependabot configuration (.github/dependabot.yml).
- High · Exposed Configuration Files — advance_ai_agents/ai-hedgefund/config/, advance_ai_agents/ai-hedgefund/motia-workbench.json. Files like config/company-mappings.json and motia-workbench.json may contain sensitive configuration data (API keys, endpoints). Fix: review configuration files for sensitive data; move sensitive values to environment variables; exclude secret-bearing configs via .gitignore; add config validation to ensure no secrets are hardcoded.
- Medium · Docker Image Build Without Security Scanning — advance_ai_agents/ai-hedgefund/Dockerfile. There is no evidence of security scanning, image signing, or a non-root user in the visible structure. Fix: scan images with Trivy, Snyk, or Docker Scout in the CI/CD pipeline before deployment; run as a non-root user; use a minimal, version-pinned base image (alpine or distroless) rather than 'latest'.
- Medium · Missing Input Validation in TypeScript Services — advance_ai_agents/ai-hedgefund/services/nebius-ai/, advance_ai_agents/ai-hedgefund/services/utils/. Service files (NebiusAIService.ts, WebSearchService.ts, ConfigService.ts) lack visible input validation, which could allow injection attacks or unexpected behavior when processing external API data. Fix: validate inputs strictly with 'joi', 'yup', or TypeScript types plus runtime checks; sanitize all external API responses; handle malformed inputs explicitly; use type guards and exhaustive checks.
- Medium · Web Search Service Lacks Security Headers — advance_ai_agents/ai-hedgefund/services/utils/WebSearchService.ts, advance_ai_agents/agentfield_finance_research_agent/ui/index.html. WebSearchService.ts returns search results without visible sanitization; results could contain malicious scripts or content that gets rendered in the UI. Fix: implement Content Security Policy (CSP) headers
LLM-derived; treat as a starting point, not a security audit.
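The pre-commit guard recommended in the first finding can be sketched as a hook like this. Hypothetical and filename-based only: it blocks staged `.env` files (deliberately allowing `.env.example`); a real scanner such as gitleaks catches far more.

```bash
# Hypothetical pre-commit hook: block staged .env files by filename.
# Save as .git/hooks/pre-commit and chmod +x.
staged=$(git diff --cached --name-only 2>/dev/null | grep -E '(^|/)\.env$' || true)
if [ -n "$staged" ]; then
  printf 'blocked: .env file(s) staged for commit:\n%s\n' "$staged" >&2
  exit 1
fi
echo "pre-commit secret check passed"
```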
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.