Builders Fund | LinkedIn (original) (raw)
About us
Builders Fund invests in pre-seed/seed US and India-based SaaS companies with core competencies in security, AI, and cloud infrastructure. We closely partner with our portfolio enabling them to make the best technical and tactical decisions on Day 1 — from sourcing top talent to system design and writing code. We believe solving early infrastructure problems provides faster time to market and more product iteration cycles, a competitive edge enabling rapid growth and return on invested capital.
Industry
Venture Capital and Private Equity Principals
Company size
2-10 employees
Headquarters
San Francisco, California
Type
Privately Held
Founded
2022
Specialties
Artificial Intelligence, Security, Privacy, Cloud Infrastructure, and DevOps
Locations
Employees at Builders Fund
Updates
- Builders Fund reposted this
Enkrypt AI | LLM Security | LLM Privacy | Generative AI
1mo Edited
Team continues to deliver! Three papers have been accepted at NeurIPS 2024. Kudos to the team's constant innovation, demonstrating our commitment to advancing the field. For more details, check out our blogs here: https://lnkd.in/gXmwAttW1. Investigating Implicit Bias in Large Language Models: A Large-Scale Study of Over 50 LLMs This study explores implicit biases in over 50 large language models, revealing that model complexity does not always reduce bias. Venue: SafeGenAI Workshop 2. SAGE-RT: Synthetic Alignment data Generation for Safety Evaluation and Red Teaming Introducing SAGE-RT, a novel pipeline generating red-teaming data to improve LLM safety by tackling harmful prompts effectively. Venue: Red Teaming GenAI Workshop 3. Efficacy of the SAGE-RT Dataset for Model Safety Alignment: A Comparative Study This paper showcases how the SAGE-RT dataset enables superior model alignment, mitigating harmful responses with greater efficiency. Venue: Pluralistic-Alignment Workshop Congratulations to all the authors: Tanay Baswa Divyanshu Kumar, Umang Jain, Sahil Agarwal, Anurakt Kumar, Jatan Loya, Nitin A B Enkrypt AI - Builders Fund reposted this
Co-Founder & CEO @ Enkrypt AI | Math PhD, Yale
3mo
These are exciting times for Enterprise AI! I'm thrilled to announce the partnership between Enkrypt AI and MongoDB, enhancing trust and efficiency in RAG applications. Our guide details securing data ingestion pipelines from knowledge base to the MongoDB Atlas Vector Search, preventing data poisoning, indirect prompt injections and protecting sensitive information. Huge thanks to Prakul Agarwal, Gregory Maxson, Soumya Ranjan Pradhan, Ashwin Gangadhar, Satbir Singh, Jatan Loya, Prashanth Harshangi, Erin Swanson, Tanay Baswa, Divyanshu Kumar and Vaibhav Agarwal for their incredible contributions. Proud of what we've accomplished together! #EnterpriseAI #Innovation #GenerativeAI #SecureAI #RAG
We are thrilled to announce our partnership with MongoDB! By securing RAG workflows for faster production deployment, enterprises can now leverage AI technologies with confidence and improved trust. Read more here:https://lnkd.in/gCwR4yqT #AIsecurity #AI #RAG #innovation #RAGsecurity - Builders Fund reposted this
Co-Founder & CEO @ Enkrypt AI | Math PhD, Yale
4mo
Is Sensitive Data Protection separate from LLM Guardrails? As the landscape of AI evolves, so too does the complexity of data management. Traditional data leakage solutions often fall short when paired with Generative AI. Why? Here’s what you need to know: 1️⃣ Beyond Patterns: Traditional methods like pattern matching or hashing no longer suffice. Data isn't just unstructured; it varies significantly in sensitivity from one company to another. 2️⃣ User Experience vs Security: Blocking all data uploads could secure your systems, but at what cost to user experience? Isn't that counterintuitive to the very purpose of implementing Generative AI? 🌟 Enkrypt AI Guardrails: We're excited to unveil our latest innovation - the Gen AI DLP Solution, integrated into our Guardrails. This isn't just another sensitive data leakage solution. It's a comprehensive system designed to not only protect against the malicious use of AI but also safeguard your sensitive data seamlessly. Whether you are consuming 3rd-party AI solutions or building your own, you need to be thinking this. Why juggle multiple point solutions when you can have an all-encompassing guardrail that's easy to manage and deploy? We’d love to hear your thoughts and explore how we can create value together with our new solution. #GenerativeAI #DataProtection #AI #Innovation - Builders Fund reposted this
After recognition from HFS Research, we have been recognized by G2 as a High Performer in the ML & Data Science Platform category. We're dedicated to continuous excellence, ensuring our platform meets and exceeds the expectations of our community. Thank you for being part of this journey with us. - Builders Fund reposted this
Building Dosu - We're Hiring :)
4mo
Dosu turned one 🎂 It's been a year since Dosu made its first comment on a LangChain issue. Since then Dosu has: - Been installed on over 16,000 repositories, including prominent projects like Superset, Airflow, and DevLake. - Has interacted with over 75,000 users on GitHub alone and helped tens of thousands more - Has collectively saved developer tens of thousands of hours - And much, much more! What's most exciting is that even though Dosu is already delivering value to thousands of developers every month, there is still so much more to build. We can't wait to see the impact Dosu will have over the next year. This wouldn't have been possible without amazing early partners like LangChain, LlamaIndex, Quivr (YC W24), Preset , Arize AI, Astronomer, Cal.com, Inc., the Cloud Native Computing Foundation (CNCF), and many more. If you want your team to spend more time coding and less time on tedious tasks, sign up at app.dosu.dev or reach out at hi@dosu.dev - Builders Fund reposted this
Investing in AI-native startups
6mo
LLMs in production 😇 We chose this topic for the 2nd edition of MagicBall Insights, to explore the nuances and challenges with building AI products and taking them to production at scale. We have Shashank Harinath from Salesforce and Tillman Elser from Sentry (sentry.io) joining us online on 25th May at 10 PM (IST) 👨💻 Rohit Agarwal, co-founder of Portkey will be moderating this discussion and taking us through various aspects of building a production-grade AI application 😎 The session with cover topics such as performant RAG architecture & components selection, predictability & reliability, evaluation & optimization, multi agent systems etc. Tune in to get inspired by "industry giants" who are solving their AI problems. Event registration link in the comments section. Rohan Sood Aravind Putrevu Vrushank Vyas Rahul Chhabria Milin Desai Nikhil Kapur - Builders Fund reposted this
Introducing Enkrypt AI LLM Safety Leaderboard 🛡️ Thrilled to share our LLM Safety Leaderboard. This is part of our comprehensive Sentry suite, this tool empowers your enterprise to deploy Large Language Models (LLMs) with enhanced security and peace of mind. Why the LLM Safety Leaderboard? - Comprehensive Vulnerability Insights: Detailed evaluations of potential security risks, including data leakage, privacy breaches, and susceptibility to cyber-attacks. - Ethical and Compliance Risk Assessment: Tests for biases, toxicity, and compliance with ethical standards and regulatory requirements to ensure models align with enterprise values and brand integrity. Impact on Generative AI Adoption: Our leaderboard, along with our red-teaming suite, is already serving as a critical resource for enterprises, giving them full confidence in their security posture. By understanding each model's unique strengths and weaknesses, AI engineers and technology teams can make better, safer, and more informed decisions, enabling faster front-office decision-making. 🔍 Be Proactive About Your AI Security Join leading enterprises that are leveraging the LLM Safety Leaderboard to enhance technological trustworthiness and secure generative AI deployments. 🌟 What’s Next? Stay tuned for upcoming updates, including more proprietary jailbreaking methods and additional tests, ensuring greater security and compliance adaptability. https://lnkd.in/gKVHmK28 Enkrypt AI Sahil Agarwal Prashanth Harshangi #AI #CyberSecurity #GenerativeAI #EnkryptAI #LLMSafety #ResponsibleAI #RSAC2024 #AISecurity