LFM2.5 is the next generation of ultra-efficient, multimodal foundation models from Liquid AI. Built on liquid neural network architecture, these models are optimized for low-latency, privacy-critical, and secure on-device deployment across CPUs, GPUs, and NPUs. They deliver powerful AI capabilities directly on consumer electronics, wearables, automotive systems, and more, without relying on the cloud.
How to use LFM2.5?
Developers and businesses can use LFM2.5 models through the LEAP platform to build, customize, and deploy efficient AI applications. Start by downloading models from Hugging Face or using the LEAP workflow. Fine-tune them for specific tasks like fraud detection, product cataloging, or synthetic video generation. Finally, deploy the specialized models directly onto target devices—from smartphones to car computers—for real-time, private, and low-latency intelligence.
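The Hugging Face step in this workflow can be sketched in a few lines of Python. Note the model id and the `transformers` pipeline usage below are assumptions for illustration, not a confirmed LFM2.5 API; check Liquid AI's Hugging Face organization for the actual checkpoint names before running.

```python
# Hedged sketch of the "download from Hugging Face and run" step.
# MODEL_ID is a hypothetical example id -- verify the real checkpoint
# name on the LiquidAI Hugging Face page before use.
MODEL_ID = "LiquidAI/LFM2-1.2B"  # assumed, not confirmed

def make_messages(prompt: str) -> list:
    """Wrap a user prompt in the chat-message format transformers expects."""
    return [{"role": "user", "content": prompt}]

if __name__ == "__main__":
    # Network- and hardware-dependent part kept behind the main guard.
    from transformers import pipeline

    chat = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    out = chat(
        make_messages("Flag anything unusual in this transaction log."),
        max_new_tokens=64,
    )
    print(out[0]["generated_text"])
```

From here, a fine-tuned checkpoint would be packaged for the target device through the LEAP workflow described above.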
LFM2.5's Core Features
Liquid Neural Network Architecture: A revolutionary design inspired by dynamical systems for superior reasoning and flexibility when processing complex, sequential, and multimodal data.
Ultra-Efficient On-Device Deployment: Models are purpose-built to run seamlessly on any hardware, including wearables, phones, laptops, and cars, using CPUs, GPUs, or NPUs with minimal compute.
Multimodal Capabilities: The LFM2 family includes specialized models for text, audio, vision, and vision-language tasks, allowing for a wide range of AI applications.
LEAP Development Platform: A comprehensive tool for developers to customize, optimize, and deploy LFMs in a single workflow, making on-device AI accessible to all skill levels.
Apollo Mobile App: A free, lightweight application that allows users to directly 'vibe check' and interact with small language models on their personal phones.
Open Source Foundation: Core models and tools are freely available for self-service use, fostering innovation and access within the developer community.
Enterprise-Grade Customization: Offers full-scale, tailored AI solutions for businesses, including model architecture search and deployment support for specific latency, privacy, and security needs.
LFM2.5's Use Cases
Financial Services Analysts: Detect and prevent fraudulent transactions in real-time by deploying low-latency models directly on banking servers, enhancing security and saving millions annually.
E-commerce Platform Managers: Optimize product cataloging and search by using efficient vision-language models on their servers, improving customer experience without cloud bottlenecks.
Automotive Engineers: Accelerate the deployment of AI for advanced driver-assistance systems (ADAS) by running vision models on the car's own computer for instant, reliable responses.
Consumer Electronics Developers: Power custom translation and voice assistant features on smartphones and smart devices, ensuring user privacy and functionality without an internet connection.
Startup Founders: Build a competitive moat by specializing LFMs for their unique service, gaining tailored guidance from Liquid's engineering team to deploy cost-effective AI.
Industrial Roboticists: Enable embedded autonomy in robots by deploying efficient models that allow for real-time perception and decision-making at the edge, away from central servers.
Healthcare Researchers: Process sensitive patient data on-premise with privacy-guaranteed models for tasks like medical imaging analysis, complying with strict data regulations.
LFM2.5's Pricing
Open Source / Everyone
Free
Free access to the foundation models, the LEAP platform, and the Apollo app for non-commercial use and qualifying small businesses.
Startup Program
Free to start
Tailored support and access to the full tech stack for eligible startups, in exchange for feedback and partnership.
Enterprise Solutions
Custom Pricing
Fully customized, white-glove AI solutions tailored to specific business needs, hardware, and data.