deepsense.ai | LinkedIn
IT Services and IT Consulting
Warsaw, Mazowieckie · 5,949 followers
Accelerate your AI implementation with our experienced humans
About
deepsense.ai helps companies implement AI-powered solutions, with the main focus on AI Guidance and AI Implementation Services. Our commitment and know-how have been appreciated by global clients including Nielsen, L’Oréal, Intel, Nvidia, United Nations, BNP Paribas, Santander, Hitachi, and Brainly. Wherever you are on your AI journey, we can guide you and help implement projects in Generative AI, Natural Language Processing, Computer Vision, Predictive Analytics, MLOps and Data Engineering. We also deliver training programs to support companies in building AI capabilities in-house.
Industry
IT Services and IT Consulting
Company size
51-200 employees
Headquarters
Warsaw, Mazowieckie
Type
Privately held
Founded
2014
Specialties
Data Science, Big Data, Machine Learning, Deep Learning, Apache Spark, Neural Networks, Artificial Intelligence, Reinforcement Learning, Data Analytics and Predictive Modeling
Locations
deepsense.ai employees
Updates
-
5,949 followers
3 days ago · Edited
🚀 Moving sentiment analysis in AI beyond the basics

Midway through EMNLP 2024, where our Data Scientist, Iwo Naglik, is presenting groundbreaking work on Aspect-Sentiment Triplet Extraction (ASTE), a challenging task in aspect-based sentiment analysis.

->>> What is ASTE? Where can you use it?
ASTE is all about extracting (aspect phrase, opinion phrase, sentiment polarity) triples from text: an essential move for any business aiming to get more than surface-level sentiment insights. Traditional methods rely on chained classifiers, which often miss the nuances and dependencies in phrases. Iwo's research brings a fresh take: using three transformer-inspired layers to capture dependencies across phrases and classifier decisions in one integrated approach. The result? Higher F1 performance and a model that's prepped to handle real-world language messiness.

->>> What is the conference about?
During EMNLP, we hear from experts like Percy Liang (Stanford University), Anca Dragan (UC Berkeley), and Tom Griffiths. They're unpacking how the latest AI tech stacks can actually be applied responsibly and effectively. Their insights shape our roadmap for deploying more efficient and ethical AI solutions for our clients. NLP here is intersecting with computer vision, robotics, and beyond. This isn't just about cool tech; it's about finding practical crossover use cases that give us a real edge. Being here in person means connecting with top researchers and devs worldwide. It's not just theory; it's about hearing different angles on the same problems and finding solutions we can immediately put to work.

->>> Can't wait to bring these insights and ASTE's potential back home to drive what's next for our clients and partners

#NLP #MachineLearning #ASTE #SentimentAnalysis #EMNLP2024
-
Add GraphRAG to Your Tech Stack!

ML developers and tech leaders in AI: ready to bridge the gap between documentation and practical application? Join our latest deeptalk led by Patryk Wyżgowski, Machine Learning Engineer, as he breaks down the fundamentals of Knowledge Graphs and GraphRAG.

🔗 Watch the full video: https://lnkd.in/dmVPSVcM

Here's what you'll learn:
1. Key concepts, architecture, and practical examples of GraphRAG, including implementations from Neo4j + LangChain and Microsoft's GraphRAG
2. Insights into LLM-based Knowledge Extraction: its possibilities and risks
3. A case study using transcript data as an example of unstructured text sources

🔗 Watch now: https://lnkd.in/dmVPSVcM

#MachineLearning #GraphRAG #LLM #TechLeadership #AI #DataScience #MLDevelopers #DeepLearning
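For a hands-on taste of the Neo4j + LangChain flavour mentioned in point 1, here is a minimal, illustrative sketch; the connection details, model name, and example question are placeholders, and exact import paths and required flags vary between LangChain versions:

```python
# Minimal GraphRAG-style sketch, assuming a running Neo4j instance and an OpenAI key.
from langchain_community.graphs import Neo4jGraph
from langchain_openai import ChatOpenAI
from langchain.chains import GraphCypherQAChain

# Connect to the knowledge graph (credentials below are placeholders).
graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# The chain asks the LLM to translate the question into Cypher,
# runs the query against Neo4j, and answers from the returned rows.
chain = GraphCypherQAChain.from_llm(
    llm=llm,
    graph=graph,
    verbose=True,
    allow_dangerous_requests=True,  # required by some recent LangChain releases
)

result = chain.invoke({"query": "Which speakers are mentioned in the transcripts?"})
print(result["result"])
```

This question-to-Cypher-to-answer loop is one common GraphRAG pattern; graph construction from unstructured text (e.g. transcripts) is a separate step that the video walks through.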
-
5,949 followers
1 week ago
20% improvement in defect detection, reaching 96% precision 🎯

What's the challenge? Traditional quality control in high-stakes manufacturing, especially with advanced materials, can't keep up. Imagine detecting defects like gaps, fuzzballs, and anomalies in real time, before they ever leave the production line.

Dive into the project to see how AI is transforming quality control and how we can help elevate your processes. 👇

#AI #QualityControl #Manufacturing #ComputerVision #DefectDetection #ArtificialIntelligence
-
deepsense.ai reposted this
CEO at deepsense.ai | Angel Investor
1 week ago
"Don’t believe everything you get from ChatGPT" – a wise reminder from our old friend "A. Lincoln." While a dose of skepticism is healthy, there’s sometimes too much of it. Thinking your use case will never work because "there will always be some hallucinations" might be going a bit far. Never say never - especially if you’re betting against AI progress and clusters with hundreds of thousands of GPUs. There are effective methods to work around hallucinations, with RAG leading the charge. And there’s plenty of reason for optimism about what the future holds. Check out my latest newsletter, Hallucinating Reality, to dive deeper.https://lnkd.in/evijGdz4 #AI #LLMs #RAG #hallucinations
Hallucinating Reality | Robert Bogucki on LinkedIn
-
deepsense.ai reposted this
CEO at deepsense.ai | Angel Investor
2 weeks ago
If you've encountered AI outputs that look accurate but are more sci-fi than fact, you're not alone. This article cuts to the core of hallucinations: AI's uncanny ability to present falsehoods as truths. We're dissecting why these hallucinations persist, the outlook for the future, and, more importantly, how to architect AI systems now that minimize them without compromising on capabilities. From RAG and RIG techniques to guardrails and human-in-the-loop verification, this read will assess our hopes, options, and workarounds.

#AI #LLMs #RAG #hallucinations
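As a taste of the RAG approach mentioned above, here is a minimal, illustrative retrieve-then-generate sketch; the toy word-overlap retriever and the generate() callable are placeholders standing in for a real vector search and LLM call:

```python
# Minimal RAG sketch: ground the model in retrieved passages to curb hallucinations.
from typing import Callable

def retrieve(question: str, corpus: list[str], top_k: int = 3) -> list[str]:
    """Toy retriever: rank passages by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def rag_answer(question: str, corpus: list[str], generate: Callable[[str], str]) -> str:
    """Build a grounded prompt and instruct the model to answer only from context."""
    context = "\n".join(retrieve(question, corpus))
    prompt = (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)  # e.g. a call to the LLM of your choice
```

The key idea is that the instruction plus retrieved context narrows the model's answer space, which is why RAG is usually the first line of defence against hallucinations.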
Hallucinating Reality | Robert Bogucki on LinkedIn
-
deepsense.ai reposted this
Software Developer - Python, Graph Systems & Data Engineering.
3 weeks ago
Last week I was able to attend this year's Neo4j GraphSummit Europe in London, representing https://deepsense.ai/ there. Loads of new insights, good networking, and an overall exciting 2-day event! Here are some of my thoughts afterwards:

- The first time I heard "explainable AI is coming and graphs will be a big part of it" was nearly 5 years ago at one of the Connected Data London meetups... fast-forward to today and about 3/4 of the conference's talks/workshops are on actual implementation methods for the concept (GDS-powered ML data analysis, RAGs + Neo4j as a vector store with graph-viz for visual debugging, or the newest kid on the block: GraphRAGs). So, not just a promise looming on the horizon, it's actually here, folks! 👀

- The "2023 -> PoCs, 2024 -> production" trend for GenAI was mentioned in the keynote and later talks... 🤔 For RAGs? Sure! We'll see if that trend applies equally to GraphRAGs (the PoC phase is certainly already here, based on the stories heard during networking and our own experience 😉).

- Highly anticipating Graph Data Science being merged into the "one, seamless experience" on AuraDB, also in terms of the licences. Not going to lie, albeit a "simple" thing, it's still definitely a barrier at the moment, based on our conversations with clients (this was hinted at during the event, so I believe there is no ETA on this yet).

- I've been on record* saying how big a benefit visual debugging is when working with graph-formatted data; it's even more so now in the GenAI context! An example for starters? Debugging your 1536-or-more-dimensional embeddings in a vector store vs. being able to visually inspect and post-process the LLM-generated graph alongside the embeddings in the standard GraphRAG approaches. As you'd expect, this is well recognised by Neo4j and it came up a couple of times as a benefit during those 2 days. Very well indeed: an obvious concept so often disregarded in the adoption of graph data formats.

Now, off I go, trying to find time to go through the conference's GraphRAG workshop materials at my own pace... :)

* "Benefit 2" section for those curious: https://lnkd.in/dQ5_V3Jp (Yes, plugging a 2-year-old article might just create enough social pressure to make part II a reality, ha! :) )
-
5,949 followers
2 weeks ago
Web Agents like WebGPT, WebVoyager, and Agent-E are transforming online automation. Gone are the days of basic bots: these agents understand, interpret, and interact with web content as we do, revolutionizing the ways we navigate, retrieve information, and handle complex online tasks. In our latest blog, we dive deep into how advancements in LLMs are powering this evolution, from understanding nuanced requests to overcoming real-world obstacles like dynamic content and complex user interactions. It's a thrilling time, but also one full of challenges.

Link: https://lnkd.in/dTFTZn_v

📌 Key Highlights:
1. Beyond Basic Bots: WebGPT, WebVoyager, and Agent-E don't just scrape data; they grasp context, interpret nuances, and adapt to real-time content.
2. Technical Innovations: WebGPT focuses on accurate Q&A with text-based navigation, while WebVoyager and Agent-E bring in visual interpretation and multi-agent architectures for superior performance.
3. The Real Challenges: Complex interactions, multimodal integration, and continuous changes in web landscapes are pushing the boundaries of what these agents can handle.
4. A Future of Convergence: Imagine an agent that can handle text, visuals, and maybe even audio or video to holistically understand web environments; this is where we're heading.

These agents are just scratching the surface, and their full potential will only be realized with more rigorous testing and a focus on robustness and security.

👉 Read our full article authored by Maksymilian Operlejn and Alicja Kotyla to see how Web Agents are poised to redefine online automation and what it will take to bring them into production-level readiness: https://lnkd.in/dTFTZn_v

#LLM #WebAgents #AI #CTO #AIDevelopers #FutureOfAI
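To make the core idea concrete, here is a schematic observe-decide-act loop in the spirit of the agents above; the browser and llm objects, their snapshot/decide/execute methods, and the Action type are hypothetical placeholders rather than any specific agent's real interface:

```python
# Schematic web-agent loop: observe the page, let the LLM pick an action, execute, repeat.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "finish" (placeholder action set)
    target: str = ""   # element selector, text to type, or the final answer

def run_agent(goal: str, browser, llm, max_steps: int = 20) -> str:
    """Drive a hypothetical browser toward a goal using an LLM as the policy."""
    for _ in range(max_steps):
        observation = browser.snapshot()        # DOM text and/or screenshot (placeholder)
        action = llm.decide(goal, observation)  # LLM returns the next Action (placeholder)
        if action.kind == "finish":
            return action.target                # the agent's final answer
        browser.execute(action)                 # click, type, scroll, ... (placeholder)
    return "step budget exhausted"
```

Text-only agents feed a DOM or text snapshot into the loop, while multimodal ones like WebVoyager also pass screenshots; the loop structure itself stays the same.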