Cerebras Systems | LinkedIn
Computer Hardware
Sunnyvale, California 36,685 followers
AI insights, faster! We're a computer systems company dedicated to accelerating deep learning.
About us
Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, functional business experts and engineers of all types. We have come together to build a new class of computer that accelerates artificial intelligence work by three orders of magnitude beyond the current state of the art. The CS-2 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE-2). The WSE-2 is the largest chip ever built. It contains 2.6 trillion transistors and covers 46,225 square millimeters of silicon. The largest graphics processor on the market has 54 billion transistors and covers 815 square millimeters. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that once took months to train can now train in minutes on the Cerebras CS-2 powered by the WSE-2. Join us: https://cerebras.net/careers/
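The chip comparison above can be checked with quick arithmetic. The sketch below uses only the transistor and die-area figures quoted in the description; the derived ratios are illustrative, not Cerebras-published numbers.

```python
# Rough scale comparison of the WSE-2 against the largest GPU cited above.
# Input figures are taken directly from the text; ratios are derived.

wse2_transistors = 2.6e12      # 2.6 trillion transistors
wse2_area_mm2 = 46_225         # square millimeters of silicon

gpu_transistors = 54e9         # 54 billion transistors
gpu_area_mm2 = 815             # square millimeters

transistor_ratio = wse2_transistors / gpu_transistors
area_ratio = wse2_area_mm2 / gpu_area_mm2

print(f"Transistors: {transistor_ratio:.0f}x more")   # ~48x
print(f"Die area:    {area_ratio:.0f}x larger")       # ~57x
```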
Industry
Computer Hardware
Company size
201-500 employees
Headquarters
Sunnyvale, California
Type
Privately Held
Founded
2016
Specialties
artificial intelligence, deep learning, and natural language processing
Products
Locations
- Primary: 1237 E Arques Ave, Sunnyvale, California 94085, US
- 150 King St W, Toronto, Ontario M5H 1J9, CA
- 10188 Telesis Ct., Ste. 550, San Diego, California 92121, US
- Tokyo, JP
- Bangalore, IN
Updates
- Meet Cerebras Inference – the fastest inference for generative AI! 🏎️ Speed: 1,800 tokens/sec for Llama 3.1-8B and 450 tokens/sec for Llama 3.1-70B, 20x faster than NVIDIA GPU-based hyperscale clouds. 💸 Price: Cerebras Inference offers the industry’s best price-performance at 10c per million tokens for Llama 3.1-8B and 60c per million tokens for Llama 3.1-70B. 🎯 Accuracy: Cerebras Inference uses native 16-bit weights for all models, ensuring the highest-accuracy responses. 🔓 Access: Cerebras Inference is open to everyone today via chat and API access. All powered by our third-generation Wafer Scale Engine (WSE-3). Try it now 👉 https://lnkd.in/gEJJ2pfY Press release: https://lnkd.in/gtF5fxHt Blog: https://lnkd.in/gZ46q4cD
- Major performance update: Llama3.1-70B now runs at 560 tokens/s! 24% faster in 3 weeks. Available now on the Cerebras Inference API and chat: https://lnkd.in/dTF-6yGP
- Cerebras Inference perf update: Llama3.1-8B: 1800 ➡️ 1927 tokens/s Llama3.1-70B: 450 ➡️ 481 tokens/s Still 🥇 for the most popular open model in the world
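Taking the originally announced speed and pricing figures at face value, a small sketch of what they imply for a single generation request. The request size below is a hypothetical example, not a Cerebras figure, and real latency would also include queueing and time-to-first-token.

```python
# Back-of-envelope latency and cost from the announced figures:
# Llama 3.1-8B:  1,800 tokens/s at $0.10 per million tokens
# Llama 3.1-70B:   450 tokens/s at $0.60 per million tokens

MODELS = {
    "llama3.1-8b":  {"tokens_per_sec": 1800, "usd_per_million_tokens": 0.10},
    "llama3.1-70b": {"tokens_per_sec": 450,  "usd_per_million_tokens": 0.60},
}

def estimate(model: str, output_tokens: int) -> tuple[float, float]:
    """Return (seconds to generate, USD cost) for a given output length."""
    m = MODELS[model]
    seconds = output_tokens / m["tokens_per_sec"]
    cost = output_tokens / 1e6 * m["usd_per_million_tokens"]
    return seconds, cost

# Hypothetical 1,000-token completion on the 70B model:
secs, usd = estimate("llama3.1-70b", 1000)
print(f"70B: {secs:.2f}s, ${usd:.4f}")   # ~2.22s, $0.0006
```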
- 🚀 Excited to share the development of NANDA, a cutting-edge Hindi Large Language Model, created with our partners G42, Inception, Core42, and MBZUAI (Mohamed bin Zayed University of Artificial Intelligence). NANDA was trained on Condor Galaxy, one of the world’s most powerful AI supercomputers, built by G42 and Cerebras. NANDA details: 🧠 13-billion-parameter model 🗂️ Trained on 2.13 trillion tokens, with a specific focus on Hindi 🎯 Optimized for Hindi, Hinglish, and English. Learn more: https://lnkd.in/dFs3BqVq
- 📢 Cerebras Systems has signed a Memorandum of Understanding (MoU) with Saudi Aramco. With Cerebras’ CS-3 systems, Aramco will build, train, and deploy large language models to support local AI initiatives. Dr. Nabil Al Nuaim, Aramco SVP of Digital & Information Technology, stated: “This MoU with Cerebras aims to accelerate our abilities to develop an AI-powered digital innovation economy in Saudi Arabia by helping to support the integration of advanced AI solutions, unlocking new opportunities for the country and localizing cutting-edge technologies with regional expertise.” Learn more here: https://lnkd.in/gBn5EFsG
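From the NANDA figures quoted above (13B parameters, 2.13T training tokens), a quick check of the tokens-to-parameters ratio, a common rough heuristic for training data scale. The comparison is illustrative and not taken from the post.

```python
# NANDA training-scale check (figures from the post above).
params = 13e9      # 13-billion-parameter model
tokens = 2.13e12   # trained on 2.13 trillion tokens

ratio = tokens / params
print(f"~{ratio:.0f} training tokens per parameter")   # ~164
```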
- Join Cerebras Senior Applied ML Scientist Gurpreet Gosal as he participates in a panel discussion on "Revolutionizing AI: Exploring the Next Frontier of Large Language Models (LLMs)." 📅 Tuesday, September 10th 🕒 12 PM GMT+4 🔗 Register here: https://lnkd.in/gwZi-X3m
- This is what high-speed inference looks like… You can generate and iterate on UI instantly with Cerebras Inference ⚡ OP: https://lnkd.in/gVkpzbHz
Similar pages
- SambaNova Systems Computer Hardware Manufacturing Palo Alto, CA
- Groq Semiconductor Manufacturing Mountain View, California
- Tenstorrent Computer Hardware Manufacturing Toronto, ON
- Graphcore Semiconductor Manufacturing Bristol, Bristol
- SiFive Semiconductor Manufacturing Santa Clara, California
- Astera Labs Semiconductor Manufacturing Santa Clara, CA
- Rivos Inc. Computer Hardware Manufacturing Santa Clara, CA
- Ampere Semiconductor Manufacturing Santa Clara, CA
- Lightmatter Computer Hardware Manufacturing Mountain View, California
- d-Matrix Semiconductor Manufacturing Santa Clara, California