🦙 #LlamaStack: Standardizing #GenerativeAI Development
Defines open API specs for #AI application building blocks
Covers full lifecycle: model training, evaluation, production deployment
Includes APIs for inference, safety, memory, agents, and more (see the inference sketch below)
Supports multiple environments: local, hosted, and on-device
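To make the inference API concrete, here is a minimal Python sketch. It assumes the llama-stack-client package, a distribution already running locally, and a served Llama 3.1 8B Instruct model; the port, model identifier, and method/parameter names are assumptions based on the client docs and may differ between versions, so treat this as a sketch rather than the definitive API.

```python
# Minimal sketch of calling a Llama Stack inference endpoint.
# Assumptions: `pip install llama-stack-client`, a distribution already
# running at http://localhost:5000 (adjust to your server's port), and a
# served Llama 3.1 8B Instruct model. Method and parameter names may
# differ between client versions (e.g. some use `model_id` instead of `model`).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model="Llama3.1-8B-Instruct",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what Llama Stack provides."},
    ],
)

print(response.completion_message.content)
```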
🛠️ Features:
#OpenSource API providers and distributions
Mix-and-match capabilities (e.g., local small models, cloud-based large models)
Consistent APIs across platforms (server, mobile, etc.)
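A rough sketch of what mix-and-match plus consistent APIs means in practice: the client code stays identical, only the distribution it points at (and the model it requests) changes. The package, method names, endpoint URLs, and model identifiers below are assumptions carried over from the sketch above, not real deployments.

```python
# Same client API against two different distributions -- only URL and model change.
# Both URLs are placeholders; adjust to your own deployments.
from llama_stack_client import LlamaStackClient

local = LlamaStackClient(base_url="http://localhost:5000")             # e.g. Ollama-backed, small model
hosted = LlamaStackClient(base_url="https://llama-stack.example.com")  # e.g. Fireworks/Together-backed, large model

def ask(client: LlamaStackClient, model: str, question: str) -> str:
    """One user turn through the shared inference API (names assumed, see above)."""
    response = client.inference.chat_completion(
        model=model,  # assumed parameter name; some versions use model_id
        messages=[{"role": "user", "content": question}],
    )
    return response.completion_message.content

print(ask(local, "Llama3.2-3B-Instruct", "Quick local sanity check?"))
print(ask(hosted, "Llama3.1-70B-Instruct", "Heavier question for the big model."))
```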
🤖 Supported implementations:
API Providers: #Meta Reference, #Fireworks, #AWS Bedrock, #Together, #Ollama, TGI, #Chroma, PG Vector, #PyTorch ExecuTorch
Distributions: Meta Reference, Dell-TGI
📦 Easy installation via pip or from source
🖥️ Includes 'llama' CLI for managing distributions, models, and more
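For reference, the install and CLI flow looks roughly like this; exact subcommands and flags may differ between releases, so double-check against the repo.

```bash
# Install from PyPI, or clone the repo and `pip install -e .` for a source install.
pip install llama-stack

# 'llama' CLI examples -- treat subcommands/flags as approximate;
# <distribution> is a placeholder for the name of your build.
llama model list                 # list available model variants
llama stack build                # assemble a distribution from chosen providers
llama stack run <distribution>   # start the server for that distribution
```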
Learn more: https://github.com/meta-llama/llama-stack