LLM Archives - AiThority https://aithority.com/tag/llm/ Artificial Intelligence | News | Insights | AiThority Tue, 13 Aug 2024 10:03:48 +0000 en-US hourly 1 https://wordpress.org/?v=6.6.1 https://aithority.com/wp-content/uploads/2023/09/cropped-0-2951_aithority-logo-hd-png-download-removebg-preview-32x32.png LLM Archives - AiThority https://aithority.com/tag/llm/ 32 32 deepset Launches Studio for LLM App Development with Cloud and NVIDIA AI Integrations https://aithority.com/machine-learning/deepset-launches-studio-for-llm-app-development-with-cloud-and-nvidia-ai-integrations/ Tue, 13 Aug 2024 10:03:48 +0000 https://aithority.com/?p=575119 deepset Launches Studio for LLM App Development with Cloud and NVIDIA AI Integrations.

deepset Studio empowers AI developers to design and visualize custom AI pipelines for deployment in mission-critical business applications deepset, the mission-critical AI company, today announced an expansion of its offerings with deepset Studio, an interactive tool that empowers product, engineering and data teams to visually architect custom AI pipelines that power agentic and advanced RAG […]

The post deepset Launches Studio for LLM App Development with Cloud and NVIDIA AI Integrations appeared first on AiThority.

]]>
deepset Launches Studio for LLM App Development with Cloud and NVIDIA AI Integrations.

deepset Studio empowers AI developers to design and visualize custom AI pipelines for deployment in mission-critical business applications

deepset, the mission-critical AI company, today announced an expansion of its offerings with deepset Studio, an interactive tool that empowers product, engineering and data teams to visually architect custom AI pipelines that power agentic and advanced RAG applications. AI teams are now able to more easily build top-tier composable AI systems, and immediately deploy them in cloud and on-premises environments using deepset Cloud and NVIDIA AI Enterprise software.

“Enterprises across industries are seeking ways to effectively integrate AI into their core operations while maintaining security and scalability”

“deepset is a leader in enabling custom AI development – powering many of the world’s most trusted, high-value use cases,” said Milos Rusic, CEO and co-founder of deepset. “The addition of deepset Studio now enables developers at the thousands of companies worldwide to architect the next generation of custom LLM applications. This new tool, combined with native integrations to NVIDIA AI Enterprise, provides a robust platform for enterprise developers to safely and reliably develop mission-critical generative AI products and features.”

Pushing the boundaries of customized AI-driven business applications

deepset Studio is a drag-and-drop visual environment for building customized AI pipelines. Its intuitive interface accelerates the AI development process with a user-friendly drag-and-drop UX for AI teams to architect a wide range of LLM use cases, from RAG to agentic applications. Key benefits of deepset Studio include the ability to:

  • Design AI pipelines with a drag-and-drop visual editor that automatically validates component relationships and pipeline structure.
  • Leverage Haystack’s comprehensive library of integrations and components to create flexible and composable application architectures like RAG and agents.
  • Jumpstart the development process with proven pipeline templates, component configurations, and shareable visual representations of simple to complex AI systems.
  • Go to production faster with native cloud and on-premises deployment options for deepset Cloud and NVIDIA AI Enterprise.

Extending the power of Haystack, deepset Cloud and NVIDIA

deepset Studio is available as a free standalone tool for users of the popular Haystack open-source framework and is built into the deepset Cloud platform and integrated with NVIDIA AI Enterprise for cloud or on-premises deployments of production AI. With this,

  • Haystack users can build and visualize AI pipelines for “cloud-to-ground” environments, speeding development time and simplifying collaboration. Haystack is the leading open-source AI framework for developing production-ready applications and is the choice for thousands of developers due to its quality codebase, flexible framework, and wide library of components and integrations.
  • deepset Cloud customers gain a powerful and intuitive visual editor as a platform feature to facilitate the creation of AI pipelines in the platform. deepset Cloud provides a complete set of development tools – from data management and LLM choice to prompt configuration and evaluation – empowering AI teams at companies such as Airbus and YPulse to develop and deploy enterprise-grade applications within a secure and scalable environment.
  • NVIDIA AI Enterprise users can optimize their deployments through deepset Studio’s integration with NVIDIA NIM microservices and the NVIDIA API catalog. Users can configure NIM microservices deployments and LLM inference directly in Studio. The tool provides deployment guides for setting up NIM and Haystack pipelines on Kubernetes, streamlining deployment to any cloud or data center.

What customers are saying:

“Enterprises across industries are seeking ways to effectively integrate AI into their core operations while maintaining security and scalability,” said Anne Hecht, senior director of product marketing for enterprise software at NVIDIA. “The integration of the NVIDIA AI Enterprise software suite with deepset Studio will help simplify and accelerate the deployment of AI applications, supporting both cloud and on-premises environments.”

“deepset is our trusted partner for launching high-quality AI applications quickly, said Dan Coates, President of YPulse. “Our customers love what we’ve built with deepset. We’re excited about deepset Studio, which simplifies AI development and showcases deepset’s rapid innovation. This tool allows us to visually transform AI ideas into customized product offerings with even more speed and ease as AI applications become increasingly sophisticated.”

Reserve a spot for deepset Studio beta

  • Sign up to use deepset Studio for free, unlocking an interactive visual environment to learn, explore, and build LLM applications with Haystack.
  • Developers can accelerate AI deployments with NVIDIA NIM microservices, available for free on the NVIDIA API catalog.
  • deepset Cloud customers have access to the Studio functionality in Beta now

Also Read: AiThority Interview with Yair Amsterdam, CEO of Verbit 

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post deepset Launches Studio for LLM App Development with Cloud and NVIDIA AI Integrations appeared first on AiThority.

]]>
CalypsoAI Builds on Success of Customizable GenAI Scanners with Multiple Product Updates https://aithority.com/machine-learning/calypsoai-builds-on-success-of-customizable-genai-scanners-with-multiple-product-updates/ Wed, 07 Aug 2024 15:44:18 +0000 https://aithority.com/?p=574983 CalypsoAI Builds on Success of Customizable GenAI Scanners with Multiple Product Updates

With a model that blocks 97% of harmful prompts, has 95% decision accuracy, and identifies 92% of blocked prompts as potential threats, CalypsoAI leads the industry in mitigating AI risks CalypsoAI, the leader in AI security, unveiled today at Black Hat USA 2024 that users now have access to a suite of new, industry-leading functionalities […]

The post CalypsoAI Builds on Success of Customizable GenAI Scanners with Multiple Product Updates appeared first on AiThority.

]]>
CalypsoAI Builds on Success of Customizable GenAI Scanners with Multiple Product Updates

With a model that blocks 97% of harmful prompts, has 95% decision accuracy, and identifies 92% of blocked prompts as potential threats, CalypsoAI leads the industry in mitigating AI risks

CalypsoAI, the leader in AI security, unveiled today at Black Hat USA 2024 that users now have access to a suite of new, industry-leading functionalities allowing them to customize their generative AI (GenAI) security measures based on what makes the most sense for their business. From rapid updates that address evolving threats in real time to new integrations and partnerships, these latest enhancements build on CalypsoAI’s cutting-edge platform and continue to deliver on its mission to secure GenAI across the enterprise—regardless of industry, model, or use case—by offering the best solution on the market.

Also Read: Humanoid Robots And Their Potential Impact On the Future of Work

“The threat landscape when it comes to AI is evolving so quickly, we realized we needed to fight fire with fire—which is exactly where our customizable scanner technology comes in”

According to research from CalypsoAI and the Everest Group, data security and privacy are among CIOs’ top concerns when it comes to generative AI, with 55% revealing it as a challenge to adoption. A significant part of this concern stems from the fact that use cases across industries, companies, and departments vary drastically. Most guardrail-like defenses are built with legacy techniques or simply not enterprise-ready, limiting effectiveness and resulting in outdated protection almost instantly. These limitations pose a significant challenge for organizations seeking to deploy and adopt GenAI while ensuring AI security can adapt to constantly evolving threats.

At the RSA Conference in May, CalypsoAI introduced its customizable GenAI Scanners, solving these major security challenges and setting a new industry standard. This game-changing technology allows organizations to create their own AI-powered scanners with simple descriptions, which CalypsoAI’s advanced LLM then transforms into a sophisticated data shield. This enables companies to address specific vulnerabilities and set detailed policies that drive the highest levels of performance. With real-time threat updates and instant adaptation, CalypsoAI is the first and only solution using GenAI to secure all enterprise use cases.

Building on this innovation, CalypsoAI’s platform offers the most advanced tools to secure, audit, and monitor internal and external GenAI usage across all models, vendors, and modes. These enhancements transcend enterprise protection across GenAI deployments more than any other solution, including:

Also Read: Humanoid Robots And Their Potential Impact On the Future of Work

  • Out-of-the-Box Scanners: While many companies want to customize scanners to suit their business needs, others want the ease of flipping a switch to power protection. To meet this need, CalypsoAI offers out-of-the-box scanners that address particular business use cases and verticals, such as PII and source code vulnerability. These pre-packaged scanners round out the existing suite of fully bespoke GenAI Scanners, enabling organizations of all sizes and industries to have the security capabilities at their disposal that make the most sense for their business.
  • Dynamic, Real-Time Threat Updates: The CalypsoAI Platform now has the ability to push out updates and react in real time to tackle new and evolving threats, such as zero-day attacks. This is made possible through CalypsoAI’s proprietary process which powers its advanced model to adapt seamlessly to new threats, ensuring a company’s security remains up-to-date and resilient against ever-changing cyber challenges.
  • New Model Partners and Integrations: CalypsoAI recently announced a partnership with IBM watsonx to offer a comprehensive defense against modern cyber threats—blending IBM’s powerful AI models with CalypsoAI’s advanced threat detection mechanisms and proactive defense strategies.
  • The CalypsoAI Security Community: CalypsoAI is creating the ability for users to share successful custom-built scanners with others—from partners in their organization’s network, colleagues, or those they interact with on channels like X or LinkedIn—to leverage to their advantage. Fundamental to the community is the desire to strengthen knowledge-sharing among security professionals and increase awareness of best practices in the effort to secure GenAI use internally and externally.

“The threat landscape when it comes to AI is evolving so quickly, we realized we needed to fight fire with fire—which is exactly where our customizable scanner technology comes in,” said Jimmy White, CTO of CalypsoAI. “At CalypsoAI we want to empower your business to proactively evolve and respond to threats before they even emerge, ensuring instantaneous, comprehensive protection and resilience. The only way to do that is to fully embrace generative AI as a key pillar and core differentiator to our platform—and the latest features ensure that we’re not only doing that at scale but giving users the ability to share those protections and learn from each other.”

CalypsoAI will be demoing the new features and its full security and enablement platform at Black Hat USA 2024. Stop by the CalypsoAI booth #4310 to see the platform come to life in an interactive crime-solving game called, ‘Behind the Mask.’ And don’t miss the company’s cocktail party on Wednesday, August 7 from 6-8 PM PDT at the Hops & Hackers Happy Hour.

Don’t miss this out: More than 500 AI Models Run Optimized on Intel Core Ultra Processors

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post CalypsoAI Builds on Success of Customizable GenAI Scanners with Multiple Product Updates appeared first on AiThority.

]]>
AiThority Interview with Kunal Purohit, President – Next Gen Services, Tech Mahindra https://aithority.com/machine-learning/aithority-interview-with-kunal-purohit-president-next-gen-services-tech-mahindra/ Wed, 07 Aug 2024 07:38:22 +0000 https://aithority.com/?p=574847 AiThority Interview with Kunal Purohit, President – Next Gen Services, Tech Mahindra

The post AiThority Interview with Kunal Purohit, President – Next Gen Services, Tech Mahindra appeared first on AiThority.

]]>
AiThority Interview with Kunal Purohit, President – Next Gen Services, Tech Mahindra

Kunal Purohit, President – Next Gen Services, Tech Mahindra discusses several key initiatives and innovations at Tech Mahindra’s Makers Lab. He talks about Project Indus, an AI-driven enterprise platform that optimizes operational efficiency and focuses on the Hindi language and its dialects through a user-contribution portal in the following Q&A:

———-

Tech Mahindra’s Makers Lab is known for fostering innovation. Can you talk about some of the most impactful and disruptive solutions from Makers Lab recently?

At Tech Mahindra’s Makers Lab, our mission is to drive purpose-driven and human-centered innovation. One of our most impactful recent initiatives is Project Indus. Born out of a desire to revolutionize the future of work, Project Indus leverages the power of AI to create a seamless and intelligent enterprise platform. This platform optimizes operational efficiency and enhances decision-making processes by providing real-time insights and predictive analytics. Project Indus utilizes an innovative ‘GenAI in a box’ framework and simplifies the deployment of advanced AI models, making it easier for enterprises to integrate and scale AI applications. The initial phase of the Indus LLM targets the Hindi language and its 37+ dialects. The project includes a portal called projectindus.in, where users can contribute linguistic data.

Also Read: Three Ways Generative AI Can Accelerate Knowledge Transfer Across An Organization

Our Makers Lab played a pivotal role in the conception and development of Project Indus. By fostering a collaborative environment, we combined the ingenuity of our diverse talent pool with cutting-edge technologies. The project aims to address real-world challenges faced by businesses today, offering scalable and adaptable solutions that drive growth and sustainability. Through Project Indus, Makers Lab exemplifies how innovation can be both disruptive and deeply beneficial, paving the way for smarter, more efficient enterprises.

What is your perspective on the impact of Generative AI (GenAI) on the workplace, particularly in terms of operational effectiveness and employee productivity?

GenAI is fundamentally transforming the workplace landscape, significantly enhancing operational effectiveness and employee productivity. At Tech Mahindra, we perceive GenAI as a catalyst for creating smarter, more efficient processes that drive innovation and deliver value at unprecedented speeds. We’ve introduced GenAI-driven pair programming to support our associates throughout the software development life cycle and have deployed GenAI-empowered co-pilots for boosting personal productivity.

By automating routine tasks, GenAI allows employees to focus on more strategic, creative endeavors, thereby boosting productivity and job satisfaction.

Our approach is holistic, focusing on empowering our workforce with advanced AI tools to foster sustainable growth and innovation. In an ever-evolving market, Tech Mahindra remains dedicated to creating a dynamic, agile workplace where technology and human ingenuity converge to deliver superior outcomes.

Also Read: The Role of AI and Machine Learning in Streaming Technology

What major ethical challenges do you foresee with integrating AI and quantum computing in the industry, and how is Tech Mahindra addressing them?

The integration of AI and quantum computing promises unparalleled advancements but also presents certain ethical challenges. One major concern is data privacy. Quantum computing’s immense processing power could potentially break current encryption methods, making sensitive data vulnerable. Additionally, the bias in AI algorithms can be magnified by the capabilities of quantum computing, leading to unintended and possibly discriminatory outcomes. At Tech Mahindra, we are proactively addressing these challenges through our Makers Lab initiatives. We are pioneering the development of quantum-safe cryptography to safeguard data in a post-quantum world. Moreover, our AI ethics framework emphasizes transparency, accountability, and fairness.

We are investing in interdisciplinary teams that include ethicists, technologists, and legal experts to ensure our innovations are aligned with ethical standards. By fostering a culture of ethical foresight and continuous learning, Tech Mahindra aims to lead responsibly in this transformative era, ensuring technology serves humanity’s best interests.

What significant AI tools and innovations has Makers Lab developed over the past few years?

At Makers Lab, our mission is to foster innovation by bridging the gap between imagination and reality. Over the past few years, we have harnessed the power of AI to create tools that push the boundaries of technology and deliver real-world impact. One of our standout innovations is the BHAML (Bharat Markup Language) solution that enables coding in native languages. Another remarkable creation is Enterprise Intelligence I/O (Entellio), a futuristic enterprise-grade on-premises chatbot powered by generative and discriminative AI. Some other innovations include Atmanirbhar Krishi, a super app for farmers, providing valuable agriculture related consolidated, curated information, and Panchang Intelligence, an ancient Indian almanac-based rainfall prediction solution.

Our commitment to quantum computing has also been recognized, with Avasant considering us a leading service provider in this cutting-edge field. Furthermore, our collaborative R&D efforts have earned us a place as a case study by the World Economic Forum, showcasing the impact of our innovative solutions. Our dedication to innovation has been acknowledged globally, with accolades such as the ISG Digital Case Study Award for Banking (UBI) in Metaverse 2023, Most Innovative Company 2021 and the Most Innovative Leader by the World Innovation Congress. Additionally, our support for start-ups was honored with the MindtheGap award for mentoring. At Makers Lab, we continue to drive technological advancements, making significant strides in the AI landscape.

Also Read: Want to Beat FOIA Backlogs? Embrace AI

What emerging trends in AI and computing are you most excited about, and how is Makers Lab positioning itself to capitalize on them?

At Makers Lab, we are on the cusp of a revolution in AI and computing, eagerly embracing trends like GenAI, Quantum Computing, and Neuromorphic Engineering. The ability of GenAI to create content that mimics human creativity is reshaping industries from entertainment to healthcare. Quantum computing promises to solve problems beyond the reach of classical computers, potentially transforming everything from cryptography to complex system simulations. Neuromorphic engineering, with its brain-inspired architectures, offers a leap in efficiency and capability for AI systems.

Makers Lab is strategically positioned at the forefront of these innovations. Our multidisciplinary teams are developing quantum algorithms to accelerate machine learning, exploring the potential of neuromorphic chips for more efficient AI, and creating generative AI models that push the boundaries of creativity. By fostering a collaborative ecosystem, we are turning these emerging trends into practical solutions, ensuring that Tech Mahindra remains a leader in the next wave of technological advancement.

How do you ensure continuous learning and development within your team at Makers Lab?

We foster a culture of continuous learning by integrating experiential learning with a collaborative spirit. Our team engages in hands-on projects, exploring emerging technologies like AI, quantum computing, and blockchain. Regular knowledge-sharing sessions ensure that our learning ecosystem remains vibrant. For instance, we recently collaborated with a leading university on quantum algorithms, enabling our team to learn from top-tier researchers. In celebration of World Quantum Day 2024, Tech Mahindra and IQM Quantum Computers partnered to raise awareness, demonstrate, and promote the transformative power, and increase an understanding of quantum science and technology.

By encouraging curiosity and innovation, we stay ahead of technological trends and empower our team to drive groundbreaking solutions. This dynamic approach to learning transforms challenges into opportunities, fueling our mission to create a future-ready workforce.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

I head TechM’s Digital & Analytics Capability Solutions Units (CSUs) globally. These units help Enterprises convert the promise of Digital & AI into tangible Business outcomes while keeping the enterprise secure from cyber attacks & vulnerabilities. Put together, these CSUs have 10000+ practitioners helping customers from conceiving new ideas and solutions, Prototyping those solutions and then scaling them across the enterprise. The CSUs PnL of 1 bn $ is amongst the fastest growing segment in the company.

I also head TechM’s Wave4 business- this is where we create our own SaaS businesses by incubating startups / providing initial seed round funding. so far we have launched 6 such start-ups that operate independently.

I have a total experience of 21 years which is distributed equitably over roles in the Corporate Office and the Field. in my most recent role before joining Tech M, I was leading HCL’s Digital Business and Practice in Europe and was based out of the UK. I have also spent considerable amount of time leading HCL’s Corporate Strategy office and working with the CEO & the Board and enabling strategic decisions around Organic and Inorganic growth of the company. Over the last 15 years, HCL has been consistently growing faster than the Industry, increasing its market-share and value for the stakeholders. This balanced corporate and field experience gives me the right mindset and aptitude to scale not just strategic business units but also companies that are looking at Digital to transform their business model.

I have started multiple business lines for HCL ( and now doing that for Tech M) that have scaled to become successful business units. I was one of the founding team members when HCL Tech started doing Software Services Business in India in early 2000s. I started the Digital Consulting business for HCL in Europe and APAC by infusing new Digital capability & talent into an Independent Digital BU of HCL called BEYONDigital. I love to learn from start-ups and believe that small teams can make big impact.

Though I worked only one year at GE Healthcare, it gave me a great foundation early on to put customer at the heart of everything I do. It also helped me understand the core values of Integrity, going big in chosen markets and focusing on core strengths to push ahead.

I am highly result oriented in my work style. I enable my teams to create desired outcomes and also enjoy the journey along the way. I lead a high performing diverse team across the globe and am proud to be working alongside them. I love running, traveling and reading.

Tech Mahindra offers technology consulting and digital solutions to global enterprises across industries, enabling transformative scale at unparalleled speed. With 145,000+ professionals across 90+ countries helping 1100+ clients, TechM provides a full spectrum of services including consulting, information technology, enterprise applications, business process services, engineering services, network services, customer experience & design services, AI & analytics, and cloud & infrastructure services. It is the first Indian company in the world to have been awarded the Sustainable Markets Initiative’s Terra Carta Seal, in recognition of actively leading the charge to create a climate and nature-positive future.

Tech Mahindra (NSE: TECHM) is part of the Mahindra Group, founded in 1945, one of the largest and most admired multinational federations of companies.

The post AiThority Interview with Kunal Purohit, President – Next Gen Services, Tech Mahindra appeared first on AiThority.

]]>
Improving LLM Accuracy with Third-party Oracles https://aithority.com/machine-learning/improving-llm-accuracy-with-third-party-oracles/ Mon, 05 Aug 2024 08:11:14 +0000 https://aithority.com/?p=574729 Improving-LLM-Accuracy-with-Third-party-Oracles

In the current frenzy over artificial intelligence, companies are pouring unprecedented resources into generative AI roadmaps. Yet, the rapid development of solutions incorporating Large Language Models (LLMs) has led to a critical juncture. Despite the simplicity and power of using language as a user interface, LLMs are often ungrounded and prone to inaccuracies, which can […]

The post Improving LLM Accuracy with Third-party Oracles appeared first on AiThority.

]]>
Improving-LLM-Accuracy-with-Third-party-Oracles

In the current frenzy over artificial intelligence, companies are pouring unprecedented resources into generative AI roadmaps. Yet, the rapid development of solutions incorporating Large Language Models (LLMs) has led to a critical juncture. Despite the simplicity and power of using language as a user interface, LLMs are often ungrounded and prone to inaccuracies, which can undermine their commercial viability. To truly harness the potential of these models and drive revenue growth, integrating them with third-party oracles of structured data and inference mechanisms is imperative.

LLMs like OpenAI’s GPT series have captivated the world with their ability to generate coherent, contextually relevant text. Their versatility spans drafting emails, writing code, creating content, and providing customer support. However, these foundational models, relying on pattern recognition and statistical correlations, often lack domain-specific expertise, up-to-date user preferences, and real-time accuracy. This generates outputs that can be factually incorrect, biased, or contextually inappropriate—significant pitfalls for brands aiming for reliable and precise LLMs that enrich the customer experience.

Also Read: The Risks Threatening Employee Data in an AI-Driven World

The Role of Third-Party Oracles

Enter third-party oracles. These intermediaries provide validated, structured data and inferential capabilities, serving as a bridge between the real world and LLMs. Structured data is organized, searchable, and analyzable, and integrating it from third-party oracles can dramatically enhance the precision of LLM outputs, unlocking meaningful value for users.

Consider an LLM used for entertainment recommendations. When trained on generalized datasets, it might produce plausible suggestions, but these outputs will lack the domain specificity needed to feel personalized for users. Instead, integrating LLMs with oracles that provide real-time data on movie trends, user preferences, and cultural shifts transforms this engine, making it far more precise and personalized.

Imagine a music streaming app that has integrated an LLM to expand search functionality. When connected to a third-party oracle, this LLM could be imbued with data on current listening trends, user preferences, and genre popularity. This integration allows the app to offer personalized and up-to-date music recommendations, closely aligning with user tastes and increasing engagement.

Inference mechanisms, which apply logical reasoning to data, further enhance LLM capabilities. LLMs offer tremendous potential value for marketers crafting eye-catching campaigns. Without grounding these LLMs in user behavior patterns and purchase history, however, their outputs can be incredibly generic. By integrating an oracle, these LLMs can support highly targeted campaigns that predict what products a user might be interested in, and the optimal timing and messaging, significantly increasing conversion rates and driving sales.

Driving Revenue Growth Through Enhanced Capabilities

The commercial potential of LLMs is vast, and their integration with third-party oracles can unlock new revenue streams and enhance existing ones. By using LLMs augmented with third-party oracles, businesses can offer superior products and services. A company specializing in live experiences, for instance, can use an LLM integrated with real-time event data oracles to provide users with up-to-date recommendations for concerts, exhibitions, and local events. This not only enhances user experience but also attracts more attendees, driving sales.

Customer support is another critical area where LLMs are making a significant impact. Integrating customer support LLMs with structured data from product databases and knowledge bases enhances their ability to resolve queries accurately and efficiently. For a platform offering personalized recommendations, integrating with an oracle providing detailed product specifications, user reviews, and usage statistics can lead to higher customer satisfaction, reduced support costs, and increased customer loyalty—all contributing to revenue growth.

Internal business departments can also use LLMs integrated with data analytics oracles to more easily extract insights and make informed decisions. This could mean using LLMs combined with user engagement and feedback data to continually optimize the customer experience. By understanding which products and services resonate most and why, businesses can fine-tune their offerings, enhancing satisfaction and increasing engagement and retention rates.

In entertainment and lifestyle industries, staying attuned to regulatory requirements and managing risks is crucial. LLMs integrated with regulatory oracles help businesses stay compliant by providing accurate and timely information on industry regulations and standards. This avoids costly fines and legal issues and builds trust with users and stakeholders, ultimately contributing to revenue growth.

Also Read: Essential Steps for Intelligent Document Processing in Clinical Trials

Avoiding Pitfalls with Third-Party Oracles

Despite their potential, LLMs present several risks and challenges that can hinder their commercial viability. Integrating third-party oracles helps mitigate these pitfalls. One primary concern with LLMs is their tendency to generate incorrect or misleading information. Relying on structured data from verified third-party oracles enhances the accuracy and reliability of LLM outputs. For example, integrating LLMs with entertainment industry databases ensures that recommendations are based on current, verified information.

Bias and fairness are critical considerations. LLMs trained on vast datasets can inadvertently learn and perpetuate biases present in the data. Third-party oracles provide balanced and unbiased data, reducing the risk of biased outputs. Additionally, oracles equipped with fairness and ethical inference mechanisms help ensure that LLM outputs are fair and ethically sound.

Also Read: AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies

Data security and privacy are paramount, especially in industries handling sensitive information like personalized user data. Third-party oracles offer secure data channels and compliance with privacy regulations, ensuring that integrating LLMs does not compromise data security and privacy. This is vital for maintaining user trust and avoiding legal repercussions.

LLMs can be computationally intensive, and their performance can degrade with increasing task complexity. Third-party oracles can offload some computational burdens by providing pre-processed and structured data, enhancing the scalability and performance of LLMs. This allows businesses to deploy LLMs at scale without compromising performance.

Looking Towards Commercialization

The integration of LLMs with grounding, third-party oracles represents a significant step toward enhancing their commercial viability. By using structured data and sophisticated inference mechanisms, businesses can unlock new revenue streams, improve customer satisfaction, and make better data-driven decisions. Moreover, this integration helps mitigate inherent risks and challenges associated with LLMs, such as inaccuracies, biases, and security concerns.

In the dynamic landscape of artificial intelligence, the synergy between LLMs and third-party oracles holds the promise of creating powerful, reliable, and commercially successful AI solutions. As businesses continue to explore and adopt these integrations, the future of LLMs looks increasingly bright, marked by enhanced capabilities and sustained revenue growth. By embracing this integration, companies can stay ahead in the competitive market and set new standards for innovation and excellence in AI-driven personalized experiences.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Improving LLM Accuracy with Third-party Oracles appeared first on AiThority.

]]>
Protect AI Acquires SydeLabs to Red Team Large Language Models https://aithority.com/machine-learning/generative-ai/protect-ai-acquires-sydelabs-to-red-team-large-language-models/ Thu, 01 Aug 2024 06:28:51 +0000 https://aithority.com/?p=574662 Protect AI Acquires SydeLabs to Red Team Large Language Models

SydeLabs’ SydeBox extends Protect AI’s AI-Security Posture Management platform with advanced cyber attack testing for LLMs Protect AI, a leader in AI security, announced the acquisition of SydeLabs, which specializes in the automated attack simulation (red teaming) of generative AI (GenAI) systems. This strategic acquisition enhances the Protect AI platform’s ability to test and improve […]

The post Protect AI Acquires SydeLabs to Red Team Large Language Models appeared first on AiThority.

]]>
Protect AI Acquires SydeLabs to Red Team Large Language Models

SydeLabs’ SydeBox extends Protect AI’s AI-Security Posture Management platform with advanced cyber attack testing for LLMs

Protect AI, a leader in AI security, announced the acquisition of SydeLabs, which specializes in the automated attack simulation (red teaming) of generative AI (GenAI) systems. This strategic acquisition enhances the Protect AI platform’s ability to test and improve LLM security and extends the company’s lead as the only provider of end-to-end AI security solutions.

“We couldn’t be more excited about joining the Protect AI mission and the prospect of what we can achieve in terms of helping companies of all sizes adopt and deploy more secure LLMs and AI applications.”

SydeLabs: A Leader in AI Red Teaming

Generative AI and LLM adoption are revolutionizing industries. LLMs are being integrated into critical end user applications such as customer service, finance and healthcare. However the complexity and scale of the technology has exacerbated security concerns that traditional application security processes simply can not keep up with or address effectively.

SydeLabs was founded less than a year ago by former product and engineering leads from Google and MPL, and has quickly established itself as a pioneer in the field of AI security. Based in Bangalore, India, SydeLabs has developed SydeBox, a cutting-edge product designed to provide comprehensive vulnerability assessments for GenAI systems. The talented team from SydeLabs will join Protect AI where they will continue to add local talent in Bangalore to complement our Seattle and Berlin based teams.

“Protect AI is continuously looking to add products to our AI security posture management platform that help our customers build a safer AI-powered world,” said Ian Swanson, CEO of Protect AI. “The acquisition of SydeLabs extends the Protect AI platform with unmatched red teaming capabilities and immediately provides our customers with the ability to stress test, benchmark and harden their large language models against security risks.”

Also Read: AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies

SydeBox will be integrated into the Protect AI Platform and rebranded as Protect AI Recon. Recon identifies potential vulnerabilities in LLMs, ensuring enterprises can deploy AI applications with confidence. Key features of Recon include no-code integration, model-agnostic scanning, and detailed threat profiling across multiple categories. Recon uses both an attack library and LLM agent based solution for red teaming and evaluating the security and safety of GenAI systems. Protect AI Recon aligns perfectly with the growing demand for robust AI security solutions, driven by formal guidance from NIST, MITRE, OWASP and CISA, as well as mandates like the Executive Order on AI Safety and Security and the EU AI Act.

“The combination of SydeLabs’ SydeBox and Protect AI’s platform provides customers a comprehensive defense-in-depth solution for building, managing, testing, deploying and monitoring LLMs,” said Ruchir Patwa, co-founder of SydeLabs. “We couldn’t be more excited about joining the Protect AI mission and the prospect of what we can achieve in terms of helping companies of all sizes adopt and deploy more secure LLMs and AI applications.”

Also Read: Extreme Networks and Intel Join Forces to Drive AI-Centric Product Innovation

The new Recon product will enable Protect AI to meet growing customer demand for robust AI security solutions. Customers will benefit from detailed threat profiling across jailbreaks, prompt injection attacks, input manipulations and other attack vectors, which are crucial for maintaining the integrity and security of AI systems. Recon covers six of the OWASP Top 10 for LLM applications.

“Recon, formally SydeBox, has enabled us to identify and fix security blindspots before deploying our GenAI solutions to ensure we are building the most secure and safe LLM powered applications, and that products we serve our customers are free from any security or safety loopholes,” said Kiran Darisi, CTO and cofounder, AtomicWork.

This acquisition and new product, Recon, further enhances Protect AI’s position as the leader in the AI security market and AI Security Posture Management (AI-SPM) solutions, differentiating it from competitors and solidifying its market presence. More specifically when used alongside Layer, Protect AI’s LLM observability and monitoring solution, Recon enables organizations to harden the implementation of LLMs against the spectrum of emerging security concerns associated with GenAI usage. Partners and stakeholders will also gain from the enhanced security capabilities, ensuring that the entire AI ecosystem is better protected against potential threats.

Also Read: More than 500 AI Models Run Optimized on Intel Core Ultra Processors

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Protect AI Acquires SydeLabs to Red Team Large Language Models appeared first on AiThority.

]]>
Applitools Launches Autonomous 2.0 to Boost AI Testing Speed and Reliability https://aithority.com/machine-learning/applitools-launches-autonomous-2-0-to-boost-ai-testing-speed-and-reliability/ Wed, 31 Jul 2024 13:34:34 +0000 https://aithority.com/?p=574609 Applitools Launches Autonomous 2.0 to Boost AI Testing Speed and Reliability

Applitools Autonomous 2.0 brings world’s most advanced no-code test authoring experience to almost anyone from QA teams to manual testers, product owners and designers Applitools, the industry leader in AI-powered end-to-end testing, today announced the latest update of the Applitools Intelligent Testing Platform: Autonomous 2.0. This cutting-edge solution enables QA teams, manual testers, product owners […]

The post Applitools Launches Autonomous 2.0 to Boost AI Testing Speed and Reliability appeared first on AiThority.

]]>
Applitools Launches Autonomous 2.0 to Boost AI Testing Speed and Reliability

Applitools Autonomous 2.0 brings world’s most advanced no-code test authoring experience to almost anyone from QA teams to manual testers, product owners and designers

Applitools, the industry leader in AI-powered end-to-end testing, today announced the latest update of the Applitools Intelligent Testing Platform: Autonomous 2.0. This cutting-edge solution enables QA teams, manual testers, product owners and designers to achieve unparalleled efficiency and reliability in the testing processes with the most flexible authoring experience ranging from fully autonomous to AI-assisted to no-code recording. Supporting teams can expand their existing code-based tests and leverage AI to quickly create new tests to boost test coverage.

Also Read: Extreme Networks and Intel Join Forces to Drive AI-Centric Product Innovation

Get started with Applitools Autonomous today

Applitools is revolutionizing the testing landscape with the Applitools Intelligent Testing Platform.  Applitools Autonomous brings the power of the Intelligent Testing Platform to everyone, from QA professionals to product teams and designers, to efficiently build, maintain and execute functional and visual tests in minutes across all environments, browsers and devices.

With Applitools Autonomous 2.0, these teams can easily manage complex testing scenarios and ensure their applications are functionally and visually perfect across various platforms and devices. The advanced algorithms and user-friendly interface make it easier than ever to streamline testing workflows, identify issues promptly, and deliver a superior user experience.

Applitools Autonomous 2.0 introduces AI-assisted interactive custom flow authoring. This allows users to debug tests in real-time with an interactive browser, combining writing steps in plain English and recording them interactively. This approach speeds up authoring and enhances tests with an auto-correcting LLM, making the process more efficient and reliable.

This update expands capabilities for complete functional testing, including Visual AI assertions, non-visual assertions, and API calls. Users can now add comprehensive non-visual assertions by simply describing test steps in plain English. Teams can capture and manipulate dynamic data with variables, perform intricate data validations, automate complex workflows, initiate HTTP requests, and much more. This versatility allows for more thorough and precise testing.

Results analysis and test maintenance have been enhanced in this latest update. UX improvements in Autonomous 2.0 span from adding high-level analysis of complete plan runs to detailed breakdowns of individual test steps, ensuring users can dive into every detail with ease. And automatic adjustments of baseline regions following application changes make sure tests stay accurate and relevant even as software evolves.

Also Read: More than 500 AI Models Run Optimized on Intel Core Ultra Processors

“We are excited to introduce Applitools Autonomous 2.0, a revolutionary step forward in AI-powered end-to-end testing,” said Alex Berry, CEO of Applitools. “This innovation brings unmatched efficiency and reliability, empowering anyone in an organization to dramatically increase test coverage to deliver flawless applications on any browser, device, or screen size. With an unmatched level of AI-infused test creation, execution, and maintenance, Applitools Autonomous 2.0 significantly reduces testing time and increases confidence in delivering flawless digital experiences for our customers. Our mission is to provide intelligent, cutting-edge testing solutions, revolutionizing the way all applications are developed, tested and delivered.”

The latest features in Applitools Autonomous 2.0 include: 

  • Interactive Test Authoring – Crafting tests has never been easier. Record test steps in real time or write them in plain English—no coding or element locating skills required. The intuitive interface makes test authoring accessible to everyone, and more stable than ever before.
  • Auto-Correcting LLM – Advanced auto-correcting LLM ensures automatic syntax and grammar correction, along with step disambiguation, keeping  tests clear and concise.
  • Functional Data-driven Testing – Effortlessly perform data-driven and dynamic data testing using variables, parameters, and non-visual assertions, streamlining the testing workflow.
  • API Testing – Initiate HTTP requests with custom headers and cookies, and verify responses or use them in subsequent test steps, enhancing the depth and breadth of a testing strategy.

Also Read: AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Applitools Launches Autonomous 2.0 to Boost AI Testing Speed and Reliability appeared first on AiThority.

]]>
The Only Extensive Guide On LLM Monitoring You Will Ever Need https://aithority.com/machine-learning/the-only-extensive-guide-on-llm-monitoring-you-will-ever-need/ Wed, 31 Jul 2024 12:09:34 +0000 https://aithority.com/?p=574299 The Only Extensive Guide On LLM Monitoring You Will Ever Need

The next decade is marked by advancements in AI not just in terms of functionality and use cases but accountability and transparency as well. We are fast moving towards the age of XAI or Explainable AI, where we hold AI models accountable for the decisions they make. When rationality becomes the fulcrum of AI functioning, […]

The post The Only Extensive Guide On LLM Monitoring You Will Ever Need appeared first on AiThority.

]]>
The Only Extensive Guide On LLM Monitoring You Will Ever Need

The next decade is marked by advancements in AI not just in terms of functionality and use cases but accountability and transparency as well. We are fast moving towards the age of XAI or Explainable AI, where we hold AI models accountable for the decisions they make.

When rationality becomes the fulcrum of AI functioning, consistent observation of LLMs becomes inevitable. With each user prompt being different from the other, it’s a perpetual learning process for LLMs. As enterprises rolling out such models, it’s on us to ensure they are perennially relevant, fair, and precise.

This is taken care of by the process called LLM monitoring. Similar to how we demystified LLM evaluation in our previous blog, we will extensively explore what LLM model monitoring is all about, the use cases, its importance, and more.

Read: AI in Content Creation: Top 25 AI Tools

Let’s get started.

LLM Monitoring: What Is It?

Like the name suggests, it is the systematic process of tracking the performance, effectiveness, stability, reliability, and other critical aspects of functionality through distinct tools, frameworks, and methodologies. There are diverse metrics LLMs are monitored on and the weightage of each metric depends on the domain or purpose it is deployed in.

For instance, the monitoring metrics for a model deployed in healthcare is different from that of the one deployed in a CRM.

In simple terms, LLM monitoring involves the tracking of:

  • How accurate its responses are in terms of relevance, factualness and precision
  • How long does a model take to generate a response
  • Any innate bias or patterns of it in its responses
  • How well does a model understand different languages, tonalities, and prompts
  • Does it provide contextually relevant responses; like identifying a sarcastic prompt and more

How Beneficial Is LLM Monitoring When You Already Have LLM Monitoring?

One of the most common questions in this space is whether you actually need to constantly monitor your LLMs while you have evaluated them before launching.

The simplest answer is a resounding yes.

LLM evaluation only ensures adequate and competitive functionality of your models but its relevance in its application only gets strengthened by consistent fine-tuning stemming from monitoring. Apart from performance optimization, there are several compelling reasons why your models need to be monitored such as:

  • Hallucinating models, where they sometimes go berserk and present irrelevant and misleading responses in different tangents from the prompt presented
  • Hacks and prompt injections that involve the feeding of malicious prompts that can lead to the LLM generate deceptive and harmful outputs
  • Training data extraction or fetching sensitive data through specific prompts adept at bypassing common LLM sensibility and discretion and more

If you observe, a live model is prone to innumerable risks and adversities that demand consistent observation, tackling, and mitigation. This is exactly why LLM model monitoring becomes inevitable.

Understanding The Difference Between LLM Monitoring And LLM Observability

LLM monitoring and observability are two commonly misinterpreted terms and rightfully so as monitoring a model loosely translates to observing them for errors and feedback. However, when you explore in depth, the differences are stark and distinct.

From the breakdown so far, we know that LLM monitoring is the process comprising tools and methods to track LLM performance. A step further to this is LLM observability. While the former answers the how, observability answers the why.

Let’s explore this a bit further.

Read: Role of AI in Cybersecurity: Protecting Digital Assets From Cybercrime

What It Does

This process offers developers and stakeholders a deeper understanding of a model’s behavior. This is more diagnostic in nature that provides holistic prescriptive insights on the functioning of a model.

LLM observability collects a wide spectrum of data from metrics, traces, logs and more to understand issues and resolve them. For instance, if LLM monitoring gives insights on whether a model is facing issues in latency, LLM observability retrieves information on why it is happening and how it can be fixed.

In a way, LLM observability is a subset of model monitoring that solves for a greater challenge.

An Extensive LLM Metrics Monitoring Cheatsheet

Quality Relevance Sentiment Security Other Significant Metrics
Factual accuracy User feedback Sentiment scoring Intrusion detection systems Error rate
Coherence Comparison Bias detection Vulnerability patching Throughput
Perplexity Sentiment analysis Toxicity detection Access control monitoring Model health
Contextual relevance Relevance scoring Token efficiency
Response completeness Drift

LLM Monitoring: Best Practices

There are ample ways issues can be mitigated through standardized practices, specifically when monitoring LLMs. Let’s look at some of the simplest and common practices.

Read: AI In Marketing: Why GenAI Should Be in All 2024 Marketing Plans?

Data Cleaning

When training your models, ensure you sanitize your training data so sensitive information that can be identifiable is removed. One of the advantages of sourcing data from experts like Shaip is that data is sanitized to ensure optimum privacy and security. This only adds to airtight compliance of mandates specific to domains as well.

Leverage Security Tools

Diverse security tools are available that specialize in protecting AI systems and LLMs. You can harness the potential of such tools to detect anomalies and mitigate issues.

2-Factor Authentication For Sensitive Actions

At times, LLMs are pushed to take some critical actions that may linger in the gray areas of being problematic. To avoid lawsuits or legal consequences, you can add a two-step authentication system, where the model warns users of their actions and asks for a confirmation if they intend to go ahead.

Containing LLM Actions

When developing, you can also limit the actions your models can perform so they don’t trigger unintended consequences. This could be validating input and output, limiting revealing information to 3rd party databases and more.

One of the best ways to stay ahead of concerns is staying abreast of latest advancements and developments in the LLM space. This is specifically critical with respect to cybersecurity. The wider your understanding of the subject, the more metrics and techniques you can come up with to monitor your models.

We believe this was a kickstarter guide in helping you grapple the complexities of LLM model monitoring and we are sure you will take it forward from here on the best strategies to track, safeguard, and optimize your AI systems and models.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post The Only Extensive Guide On LLM Monitoring You Will Ever Need appeared first on AiThority.

]]>
Lakera Raises $20 Million Series A to Secure Generative AI Applications https://aithority.com/machine-learning/lakera-raises-20-million-series-a-to-secure-generative-ai-applications/ Thu, 25 Jul 2024 07:01:22 +0000 https://aithority.com/?p=574323 Lakera Raises $20 Million Series A to Secure Generative AI Applications

Gandalf, the world’s largest AI red team, was created by Lakera and trains AI to Secure AI Lakera, the world’s leading real-time Generative AI (GenAI) Security company, has raised $20 million in a Series A funding round. Led by European VC Atomico, with participation from Citi Ventures, Dropbox Ventures, and existing investors including redalpine, this investment brings […]

The post Lakera Raises $20 Million Series A to Secure Generative AI Applications appeared first on AiThority.

]]>
Lakera Raises $20 Million Series A to Secure Generative AI Applications

Lakera Launches to Activate the World's Most Advanced

Gandalf, the world’s largest AI red team, was created by Lakera and trains AI to Secure AI

Lakera, the world’s leading real-time Generative AI (GenAI) Security company, has raised $20 million in a Series A funding round. Led by European VC Atomico, with participation from Citi Ventures, Dropbox Ventures, and existing investors including redalpine, this investment brings Lakera’s total funding to $30 million. This funding positions Lakera at the forefront of the global economy’s race to secure GenAI applications. As part of this round, Atomico Partner Sasha Vidiborskiy will join Lakera’s board.

Also Read: Proactive Ways to Skill Up for AI

It’s been predicted that by 2026, 80% of enterprises will have used GenAI or GenAI-enabled applications in production environments, up from less than 5% in 2023. As businesses worldwide scramble to harness the power of GenAI without exposing themselves to AI-specific risks, the demand for Lakera’s platform is expected to continue growing at a rapid clip.

While GenAI could add an estimated $4.4 trillion annually to global GDP, cybersecurity remains the second most-cited risk associated with its adoption as traditional cyber tools are ill-equipped to address the novel dangers it poses. This creates a critical need for security solutions that address GenAI-specific risks, without which businesses are unable to unlock the benefits of AI.

David Haber, Founder and CEO of Lakera, explains the urgency: “With the advent of GenAI, the old cybersecurity techniques aren’t sufficient. Enterprises now operate in a world where anyone who knows how to talk knows how to hack. Security solutions need to change but they can’t get in the way of user experience. There is a need for real-time AI security solutions that continuously evolve and enable amazing user experiences.”

David elaborates in his blog published today.

Cybersecurity-risks posed by LLMs

The spectrum of LLM vulnerabilities is both diverse and severe:

  • GenAI introduces prompt attacks as the most widely accessible hacking method. In the past, hackers needed to be able to code – but now, the only barrier to entry is being able to talk.
  • Prompt attacks can easily be used to manipulate GenAI so that a malicious actor can gain unauthorized access to a company’s systems, steal confidential data, take unauthorized actions, and generate harmful content.
  • AI “sleeper agents” can lie dormant until activated for malicious purposes. Jailbreaking techniques can compromise powerful models in mere minutes.
  • AI-targeted worms can bypass security measures, harvesting data and launching widespread attacks.
  • Most alarmingly, researchers have shown the ability to jailbreak a million LLM agents with a single image in just 27-31 chat rounds, demonstrating the potential for rapid, large-scale compromises.

Also Read: AiThority Interview with Nicole Janssen, Co-Founder and Co-CEO of AltaML

Lakera’s comprehensive approach to AI security delivers three key benefits for enterprises: protection that doesn’t slow down their AI applications; the ability to stay ahead of AI threats with continuously evolving intelligence; and centralized implementation of AI security controls.

The company’s growth comes at a pivotal moment in AI development. With the adoption of GenAI, users now direct software applications using natural language, and Lakera’s real-time AI security does not compromise application interactivity. To make implementation effortless for developers, the company’s API can be integrated with a single line of code, providing an ultra-low-latency security layer compatible with any GenAI model. Centralized controls allow security teams to customize application-specific policies and address emerging threats without code changes.

Lakera is uniquely positioned to tackle these fast-changing challenges, in part thanks to Gandalf – an AI educational platform created by the company which serves as the world’s largest AI red team. With over 250 thousand unique users worldwide, including companies such as Microsoft where it’s used in security training, Gandalf generates a real-time database of AI threats which is growing by 100 thousands of uniquely new attacks every day and keeps Lakera’s software up to date, ensuring continuous protection for customers. The 50+ million data points generated by Gandalf, combined with the founding team’s deep experience in building AI systems with real-time requirements, means Lakera’s customers are able to stay ahead of threats and deliver amazing user experiences.

Atomico Partner Sasha Vidoborskiy, who will join Lakera’s board, said: “Lakera has seen impressive commercial pull, winning customers such as Dropbox and a top 3 US bank, and already having more than 35% of Fortune 100 companies knock on their door. All those clients have an urgency to deploy GenAI applications into production, but can’t do it without protection in place. Easy integration, best-in-class performance, and low latency are why they pick Lakera. But most importantly, the company is led by David and his team, who are thought leaders in the space and have a deep understanding of what the adoption of AI means for new cybersecurity risks.”

“At Dropbox, ensuring the security of our systems and customers’ data is a top priority,” said Donald Tucker, Head of Corporate Development and Ventures at Dropbox. “Lakera’s team has extensive expertise and a deep understanding of the complex security challenges companies are facing with LLMs and Generative AI. Their advanced technology is helping companies like Dropbox safeguard against vulnerabilities these new technologies pose. We’re thrilled to strengthen our existing relationship with Lakera by investing in the future of the company and AI security.”

Lakera plans to use this funding round to invest further in its product development and expand its presence in the US, where it already has a foothold in Silicon Valley.

Also Read: AiThority Interview with Nicole Janssen, Co-Founder and Co-CEO of AltaML

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Lakera Raises $20 Million Series A to Secure Generative AI Applications appeared first on AiThority.

]]>
Chekable Debuts Gen AI Platform for Patent Professionals, With Investment From NEC X https://aithority.com/machine-learning/chekable-debuts-gen-ai-platform-for-patent-professionals-with-investment-from-nec-x/ Thu, 25 Jul 2024 06:51:11 +0000 https://aithority.com/?p=574322 Chekable Debuts Gen AI Platform for Patent Professionals, With Investment From NEC X

Newest Graduate of Elev X! Ignite Venture Studio Chekable Inc. Utilizes AI to Streamline End-to-End Patent Filing Processes for Attorneys & Agents NEC X, the Silicon Valley innovation accelerator backed by NEC’s advanced technologies, has proudly announced the graduation of Chekable from its prestigious Elev X! Ignite venture studio program, in addition to a new investment from […]

The post Chekable Debuts Gen AI Platform for Patent Professionals, With Investment From NEC X appeared first on AiThority.

]]>
Chekable Debuts Gen AI Platform for Patent Professionals, With Investment From NEC X

Newest Graduate of Elev X! Ignite Venture Studio Chekable Inc. Utilizes AI to Streamline End-to-End Patent Filing Processes for Attorneys & Agents

NEC X, the Silicon Valley innovation accelerator backed by NEC’s advanced technologies, has proudly announced the graduation of Chekable from its prestigious Elev X! Ignite venture studio program, in addition to a new investment from NEC X to help scale Chekable’s transformative AI solution for inventors and patent professionals.

Also Read: Proactive Ways to Skill Up for AI

“Their guidance has enabled us to refine our technology and fast-track our entry into the market. With NEC X’s extensive ecosystem behind us, we are poised to revolutionize the patent process.”

Alongside this milestone, Chekable is now offering early access to its groundbreaking generative AI platform, designed to revolutionize the patent filing process. The first-of-its-kind solution automates and streamlines crucial yet time consuming aspects of patent applications, protection and prosecution, saving upwards of 70% or more time in drafting documents. This platform will revolutionize workflows for corporate inhouse IP teams, boutique patent law practices and multi-practice law firms.

“Chekable’s innovative approach to patent management epitomizes the forward-thinking innovation we champion at NEC X,” said Shintaro Matsumoto, President and CEO of NEC X. “Their comprehensive solution and use of AI redefines the patent process, making it more efficient and accessible. We expect impressive results from Chekable, as they continue to enhance and scale their transformative platform worldwide.”

Also Read: AiThority Interview with Nicole Janssen, Co-Founder and Co-CEO of AltaML

Currently, the patent application process is notoriously complex, time-consuming and siloed, resulting in inefficiencies and delays that can take years to complete a filing. While some solutions exist, very few cover the entire process. Chekable’s vision is to offer a comprehensive suite of tools that streamline the entire patent lifecycle in one end-to-end platform.

Adhering to local jurisdiction language requirements, Chekable’s advanced solution can accurately and efficiently generate patent applications, search and monitor potential infringement with robust semantic search, and even matches patent professionals with inventors in need of guidance.

Chekable’s co-founder and CEO, Nipun Kumar, understands the challenges and complexities of filing patents first-hand, having authored over 60 patents, including numerous AI solutions, for Fortune 500 companies. Additionally, Chekable’s CAIO and co-founder, Ashutosh Chapagain, is an NLP specialist having deep AI and LLM knowledge working with robotic startups and institutions, such as Indiana University.

“NEC X has been an extraordinary partner, offering us invaluable resources, mentorship, and unwavering support,” said Nipun Kumar, CEO and Founder of Chekable. “Their guidance has enabled us to refine our technology and fast-track our entry into the market. With NEC X’s extensive ecosystem behind us, we are poised to revolutionize the patent process.”

Chekable joined NEC X’s Elev X! Ignite program in 2023, as part of cohort Batch 10, having been selected from over 140 startup applicants based on the founding team’s experience and growth potential. During the program, NEC X provided not only hands-on support, funding, mentorship and skill-development workshops, but also involved several NEC overseas research labs to test Chekable’s solution, offering valuable feedback and support for product development to ensure a market-ready product.

Elev X! Ignite transforms early-stage innovators and founders into seed-ready startups, offering a distinctive blend of engineering expertise, strategic collaboration, R&D resources, and up to $200K in equity funding to help early-stage startups succeed and grow. During the six- to nine-month program, Chekable worked closely with NEC X’s engineers, innovation team, researchers, industry experts, and mentors to validate their business model, refine their product, and prepare for scalable market entry. During this process, they’ve worked with numerous in-house IP teams and patent law firms to gather the requirements, beta test and workshop their solution for the market.

With the completion of the Elev X! Ignite program, along with funding and continued support, Chekable is poised for rapid and significant growth.

Also Read: AiThority Interview with Nicole Janssen, Co-Founder and Co-CEO of AltaML

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

 

The post Chekable Debuts Gen AI Platform for Patent Professionals, With Investment From NEC X appeared first on AiThority.

]]>
Iterative’s New DataChain Enables Use of AI Models to Evaluate the Quality of Unstructured Data https://aithority.com/machine-learning/iteratives-new-datachain-enables-use-of-ai-models-to-evaluate-the-quality-of-unstructured-data/ Tue, 23 Jul 2024 13:58:10 +0000 https://aithority.com/?p=574268 Iterative’s New DataChain Enables Use of AI Models to Evaluate the Quality of Unstructured Data

Company accelerates AI development by offering new open-source tool for data curation and model evaluation at scale Iterative, the company dedicated to streamlining the workflow of artificial intelligence (AI) engineers and creator of widely-used open-source projects in MLOps, today announced the upcoming release of DataChain, a new open-source tool for processing and evaluating unstructured data. Also […]

The post Iterative’s New DataChain Enables Use of AI Models to Evaluate the Quality of Unstructured Data appeared first on AiThority.

]]>
Iterative’s New DataChain Enables Use of AI Models to Evaluate the Quality of Unstructured Data

Company accelerates AI development by offering new open-source tool for data curation and model evaluation at scale

Iterative, the company dedicated to streamlining the workflow of artificial intelligence (AI) engineers and creator of widely-used open-source projects in MLOps, today announced the upcoming release of DataChain, a new open-source tool for processing and evaluating unstructured data.

Also Read: AiThority Interview with Nicole Janssen, Co-Founder and Co-CEO of AltaML

According to McKinsey’s Global Survey on the state of AI published in early 2024, only 15 percent of surveyed companies have realized a meaningful effect of generative AI (GenAI) on their business to date. A large part of the problem lies in the challenge of processing unstructured data at scale and estimating the results which is traditionally cumbersome – and stems from the missing link between the structured data technologies and the newer AI workflows based in Python. While the (older) analytical databases provided full control over the data quality, unstructured multimodal data like text and images proved much harder to assess and improve at scale.

“The biggest challenge in adopting artificial intelligence in the enterprise today is the lack of practices and tools for data curation and generative AI evaluation that can ensure the quality of results,” said Dmitry Petrov, CEO of Iterative. “As the next step, we need AI models that can evaluate and improve AI models. So far this has only happened at the industry forefront – take a look at DeepMind’s AlphaGo training against itself, or OpenAI’s DALL-E3 curating its own dataset. Our goal is to change this.”

The proliferation of sophisticated AI foundational models opens the door to intelligent curation and data processing. However, the absence of easy solutions to wrangle unstructured data using AI models in easy-to-manage formats keeps the technology barrier high. In practice, most AI engineers are still building custom code for converting their JSON model responses, adapting them to databases, and running models in parallel with out-of-memory data.

Also Read: Proactive Ways to Skill Up for AI

DataChain democratizes the popular AI-based analytical capabilities like ‘large language models (LLMs) judging LLMs’ and multimodal GenAI evaluations, greatly leveling the playing field for data curation and pre-processing. DataChain can also store and structure Python object responses using the latest data model schemas – such as those utilized by leading LLM and AI foundational model providers.

Founded in 2018, Iterative creates developer tools for AI engineers. The company has recorded more than 20M downloads for its open-source software DVC and earned more than 18,000 stars on GitHub. Iterative now has more than 400 contributors across its different tools and over 20 customers in their enterprise SaaS including F500 companies like UBS. Iterative is backed by True Ventures, Afore Capital, and 468 Capital.

Also Read: Red Teaming is Crucial for Successful AI Integration and Application

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Iterative’s New DataChain Enables Use of AI Models to Evaluate the Quality of Unstructured Data appeared first on AiThority.

]]>