Packages
Reflex
Full-stack Python web framework that compiles to React without writing JavaScript. Reflex uses Python classes to define UI components, state management, and event handlers — compiling to a React frontend with a FastAPI backend. Ideal for data scientists and AI engineers who want to build interactive web applications without learning JavaScript/React. Built-in real-time state sync between frontend and backend. Deploy locally or to Reflex Cloud.
Slick
Functional Relational Mapping (FRM) library for Scala — the standard Scala database access layer. Slick maps Scala collection operations (filter, map, flatMap, join) to SQL queries, type-checked at compile time, providing type-safe database access without raw SQL strings. Supports PostgreSQL, MySQL, SQLite, H2, and Oracle via JDBC. Key features: type-safe queries via Slick's lifted embedding DSL, async non-blocking database access via Scala Futures, plain SQL support (sql"SELECT..."), and schema code generation from existing databases. Alternative to Doobie (more functional) and Quill (macro-based).
Treasury Prime Banking-as-a-Service API
Treasury Prime banking-as-a-service REST API for fintech companies and software platforms to embed banking products (accounts, payments, cards) through a network of FDIC-insured bank partners. Enables AI agents to automate embedded banking workflows: bank account creation and KYC onboarding, ACH origination and receipt, debit card issuance and management, real-time transaction and balance monitoring, sub-accounts and account hierarchy, push-to-card instant disbursements, wire transfers for high-value payments, spend controls and card authorization rules, and bank partner selection and redundancy for multi-bank BaaS resilience. Integrates with lending platforms, marketplaces, and software companies building embedded banking products.
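A hedged sketch of what an ACH origination call might look like: it builds (but does not send) a request with stdlib `urllib`. The sandbox URL, endpoint path, credential scheme, and field names below are assumptions for illustration — consult Treasury Prime's API reference for the actual schema.

```python
import base64
import json
import urllib.request

API_KEY_ID = "example_key_id"    # hypothetical credentials
API_SECRET = "example_secret"

payload = {
    "account_id": "acct_123",    # hypothetical funding account
    "counterparty_id": "cp_456", # hypothetical receiving counterparty
    "amount": "125.00",
    "direction": "credit",       # push funds to the counterparty
}

# Basic auth token from the key id/secret pair
token = base64.b64encode(f"{API_KEY_ID}:{API_SECRET}".encode()).decode()

req = urllib.request.Request(
    "https://api.sandbox.treasuryprime.com/ach",  # assumed sandbox base URL
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the ACH origination; omitted here.
```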
llama.cpp / llama-cpp-python
Low-level C++ LLM inference engine with Python bindings (llama-cpp-python) that runs GGUF-format quantized models locally with fine-grained control over context, sampling, and constrained generation.
visx
Collection of low-level React visualization primitives from Airbnb built on D3. Unlike high-level chart libraries, visx provides the building blocks (scales, shapes, axes, gradients, patterns, interactions) as React components without opinionated chart layouts. Used when you need the power of D3 with the React component model. Powers complex custom visualizations at Airbnb scale.
12306 MCP
An MCP server that enables AI models to search Chinese railway (12306) train tickets, query station stops, filter train info, and find multi-leg transfer routes.
AWS Sample Serverless MCP Servers
Collection of reference implementations showing how to deploy MCP servers on AWS serverless infrastructure (Lambda, ECS) with both stateless and stateful patterns, including AI agent examples using the Strands SDK.
AssemblyAI API
AssemblyAI transcribes audio files with high accuracy and offers LeMUR — an LLM layer over transcripts — for summarization, Q&A, and structured data extraction in one API call.
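A hedged stdlib sketch of the two-step flow: one request to start a transcription job, a second to run a LeMUR task over the finished transcript. The requests are built but not sent, and the transcript id is a placeholder; check AssemblyAI's API reference for the current endpoint shapes.

```python
import json
import urllib.request

API_KEY = "example_api_key"  # hypothetical
BASE = "https://api.assemblyai.com"

# Step 1: submit an audio URL for transcription
transcribe_req = urllib.request.Request(
    f"{BASE}/v2/transcript",
    data=json.dumps({"audio_url": "https://example.com/meeting.mp3"}).encode(),
    headers={"authorization": API_KEY, "content-type": "application/json"},
    method="POST",
)

# Step 2: once the transcript status is "completed", run a LeMUR task over it
lemur_req = urllib.request.Request(
    f"{BASE}/lemur/v3/generate/task",
    data=json.dumps({
        "transcript_ids": ["example_transcript_id"],
        "prompt": "Summarize the key decisions in three bullets.",
    }).encode(),
    headers={"authorization": API_KEY, "content-type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(...) would submit each request; omitted here.
```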
Bank API Reference Implementation
A compliance-focused reference implementation of a banking API built with ASP.NET Core 10.0. Includes REST API with OpenAPI spec, MCP server for AI integration, JWT/OIDC authentication, rate limiting, and validation against OWASP API Security Top 10, GDPR, and CCPA.
ClickHouse Cloud API
ClickHouse Cloud API — managed ClickHouse columnar database for real-time analytics with sub-second queries on billions of rows, accessible via REST HTTP interface or native protocol.
Context Engine
An open-core, self-improving code search platform that indexes codebases into vector embeddings and exposes semantic search via MCP servers. Uses ONNX embeddings, Qdrant vector DB, Redis cache, and an adaptive reranking system that learns from usage patterns. Provides two MCP endpoints: a memory server and an indexer server.
Depot
Remote Docker build acceleration service. Depot replaces `docker build` with `depot build` and runs builds on fast, persistent-cache remote machines with native ARM64 support. Typical result: 10-40x faster Docker builds due to persistent layer caches across builds and team members. Drop-in replacement for Docker BuildKit — no Dockerfile changes required. REST API and GitHub Actions integration for CI pipeline acceleration.
Dropbox Video Download MCP Server
MCP server for connecting video workflows to Dropbox cloud storage. Enables AI agents to organize and access video assets stored in Dropbox — downloading videos, managing video file organization, and integrating Dropbox video storage into AI-driven media processing workflows.
Fern
SDK and documentation generator from OpenAPI or Fern's own IDL. Fern generates idiomatic, production-quality SDKs in TypeScript, Python, Java, Go, Ruby, and C# from your API definition. Unlike generic code generators, Fern's output follows language idioms (dataclasses in Python, interfaces in TypeScript) rather than generic scaffolding. Used by Anthropic, ElevenLabs, Cohere, and others for their official SDKs. Also generates interactive API reference documentation.
Google BigQuery API
Google BigQuery serverless data warehouse API — run SQL analytical queries on petabyte-scale datasets with automatic scaling, columnar storage, and ML features built in.
Google Cloud Pub/Sub API
Google Cloud Pub/Sub is a fully managed real-time messaging service for event ingestion and delivery — decouples event producers from consumers with at-least-once delivery, replay, and fan-out capabilities.
Grafana Agent
Lightweight, OpenTelemetry-compatible telemetry collector that scrapes metrics, tails logs, and collects traces, forwarding all three signals to Grafana Cloud or self-hosted backends.
Hibernate Reactive
Reactive programming model for Hibernate ORM — replaces JDBC with non-blocking Vert.x database drivers (PostgreSQL, MySQL, MariaDB, DB2, SQL Server, CockroachDB). Returns Mutiny Uni&lt;T&gt;/Multi&lt;T&gt; (or Java CompletionStage via the alternative Stage API) instead of blocking results. Hibernate Reactive enables JPA-style entity mapping with reactive execution — same @Entity, @ManyToOne, @OneToMany annotations, but queries return reactive types. First-class Quarkus integration via Panache Reactive (extends PanacheEntity with Uni&lt;&gt; return types). Designed for Quarkus reactive web stacks where blocking database I/O would block the event loop.
HuggingFace Text Generation Inference
Production-grade LLM inference server from HuggingFace. TGI provides OpenAI-compatible REST API for serving open-source LLMs (Llama, Mistral, Gemma, Falcon, etc.) with continuous batching, PagedAttention for memory efficiency, quantization (GPTQ, AWQ, EETQ), streaming tokens, and multi-GPU tensor parallelism. Powers HuggingFace's Inference Endpoints and is the reference serving solution for the HuggingFace ecosystem. Used when deploying open-source LLMs in production.
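A hedged sketch of a chat request against TGI's OpenAI-compatible endpoint, built but not sent with stdlib `urllib`; it assumes a TGI server listening on localhost:8080:

```python
import json
import urllib.request

body = {
    "model": "tgi",  # placeholder name; TGI serves whichever model it was launched with
    "messages": [
        {"role": "user", "content": "Explain continuous batching briefly."}
    ],
    "max_tokens": 128,
    "stream": False,  # set True to receive streamed tokens
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would return an OpenAI-style completion response.
```

Because the endpoint mirrors OpenAI's chat completions schema, existing OpenAI client code can usually be pointed at a TGI server by changing only the base URL.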
IAM Policy Autopilot
Open-source static code analysis tool that generates baseline AWS IAM policies by analyzing Python, Go, and TypeScript application code for AWS SDK calls. Also debugs AccessDenied errors by synthesizing targeted policy fixes. Works as both CLI and MCP server for AI coding assistant integration.