feat(p2p): Federation and AI swarms by mudler · Pull Request #2723 · mudler/LocalAI

…9.1 by renovate (#24152)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| docker.io/localai/localai | minor | `v2.17.1-aio-cpu` -> `v2.19.1-aio-cpu` |
| docker.io/localai/localai | minor | `v2.17.1-aio-gpu-nvidia-cuda-11` -> `v2.19.1-aio-gpu-nvidia-cuda-11` |
| docker.io/localai/localai | minor | `v2.17.1-aio-gpu-nvidia-cuda-12` -> `v2.19.1-aio-gpu-nvidia-cuda-12` |
| docker.io/localai/localai | minor | `v2.17.1-cublas-cuda11-ffmpeg-core` -> `v2.19.1-cublas-cuda11-ffmpeg-core` |
| docker.io/localai/localai | minor | `v2.17.1-cublas-cuda11-core` -> `v2.19.1-cublas-cuda11-core` |
| docker.io/localai/localai | minor | `v2.17.1-cublas-cuda12-ffmpeg-core` -> `v2.19.1-cublas-cuda12-ffmpeg-core` |
| docker.io/localai/localai | minor | `v2.17.1-cublas-cuda12-core` -> `v2.19.1-cublas-cuda12-core` |
| docker.io/localai/localai | minor | `v2.17.1-ffmpeg-core` -> `v2.19.1-ffmpeg-core` |
| docker.io/localai/localai | minor | `v2.17.1` -> `v2.19.1` |
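As an illustration, applying one of the tag bumps from the table means pulling the new image and restarting on it. The container name and port mapping below are placeholders for illustration, not taken from this PR; substitute the image variant you actually run:

```shell
# Pull the updated CPU all-in-one image (tag from the update table above)
docker pull docker.io/localai/localai:v2.19.1-aio-cpu

# Restart the container on the new tag (name and ports are illustrative)
docker run -d --name local-ai -p 8080:8080 docker.io/localai/localai:v2.19.1-aio-cpu
```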

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

mudler/LocalAI (docker.io/localai/localai)

v2.19.1

Compare Source


LocalAI 2.19.1 is out! :mega:
TL;DR: Summary spotlight
🖧 LocalAI Federation and AI swarms

LocalAI is revolutionizing the future of distributed AI workloads by making it simpler and more accessible. No more complex setups, Docker or Kubernetes configurations – LocalAI lets you create your own AI cluster with minimal friction. By auto-discovering your existing devices and sharing inference work or model weights across them, LocalAI aims to scale both horizontally and vertically with ease.

How does it work?

Starting LocalAI with `--p2p` generates a shared token for connecting multiple instances, and that's all you need to create an AI cluster, eliminating the need for intricate network setups. Simply navigate to the "Swarm" section in the WebUI and follow the on-screen instructions.
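The flow above can be sketched as two shell sessions. The token-passing convention shown here (a `TOKEN` environment variable) is an assumption based on the WebUI's one-liners; copy the exact commands the Swarm tab shows for your version:

```shell
# Node 1: start LocalAI in P2P mode; a shared token is printed at startup
local-ai run --p2p

# Node 2: join the same cluster by reusing that token
# (TOKEN is a placeholder for the value printed by node 1)
TOKEN="<token-from-node-1>" local-ai run --p2p
```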

For fully shared instances, start LocalAI with `--p2p --federated` and follow the guidance in the Swarm section. This feature is still experimental and should be considered tech-preview quality.
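A minimal federated setup might look like the following sketch; as above, the `TOKEN` variable is an assumption, so defer to the on-screen instructions in the Swarm tab:

```shell
# Start a federated instance; requests can be shared across the federation
local-ai run --p2p --federated

# Additional nodes join the same federation with the shared token
# (TOKEN is a placeholder for the token generated by the first node)
TOKEN="<shared-token>" local-ai run --p2p --federated
```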

Federated LocalAI

Launch multiple LocalAI instances and cluster them together to share requests across the cluster. The "Swarm" tab in the WebUI provides one-liner instructions on connecting various LocalAI instances using a shared token. Instances will auto-discover each other, even across different networks.


Check out a demonstration video: Watch now

LocalAI P2P Workers

Distribute model weights across nodes by starting multiple LocalAI workers. This is currently available only for the llama.cpp backend, with plans to expand to other backends soon.
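A worker-based sketch, assuming the `worker p2p-llama-cpp-rpc` subcommand documented for LocalAI's distributed llama.cpp inference; the subcommand name and `TOKEN` convention are assumptions, so verify them against the docs for your version:

```shell
# Main node: serves the API, coordinates the swarm, and prints the token
local-ai run --p2p

# Worker nodes: contribute compute and memory for llama.cpp weight sharding
# (TOKEN is a placeholder for the value printed by the main node)
TOKEN="<token-from-main-node>" local-ai worker p2p-llama-cpp-rpc
```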


Check out a demonstration video: Watch now

What's Changed
Bug fixes 🐛
🖧 P2P area
Exciting New Features 🎉
🧠 Models
📖 Documentation and examples
👒 Dependencies
Other Changes
New Contributors

Full Changelog: https://github.com/mudler/LocalAI/compare/v2.18.1...v2.19.0

v2.19.0

Compare Source

LocalAI 2.19.0 is out! :mega:

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.



This PR has been generated by Renovate Bot.