ABBYY's 2nd Annual Intelligent Automation Month Showcases AI's Power to Speed Up Business

During September, global innovation experts educate business and IT leaders how to increase efficiencies and ROI using AI

The second annual Intelligent Automation Month takes place during September 2024 with online programs designed to help organizations from every industry understand the impact of purpose-built AI and how it can advance their business.

Weekly webinars will provide accessible and digestible information based on real-world applications, best practices, and current research, helping attendees identify automation opportunities, navigate AI compliance, and prepare for new regulations. Attendees will learn the benefits of key AI-powered intelligent automation technologies such as process intelligence and intelligent document processing (IDP), with valuable input from customers, partners, and technology influencers on how to strategically leverage AI and automation to maximize ROI.

Business and IT leaders and journalists are invited to join ABBYY in the following sessions to gain first-hand knowledge about overcoming common business challenges and achieving success with AI. These sessions will include industry experts alongside ABBYY leaders.

  • September 5th: The Inevitable Wave of AI Regulations: Balancing Utility and Harms. This crucial session will break down the latest global AI regulations and compliance best practices. ForHumanity, a public non-profit organization focusing on independent auditing of AI systems, will provide important information to attendees on ensuring safe and trustworthy AI methods, along with cybersecurity considerations from ABBYY.
  • September 12th: E-invoicing is coming! Is your organization ready?
    You will be ready after this webinar. Featuring Billentis, we’ll enlighten you on the current state of e-invoicing globally, with specific focus on Europe. We’ll discuss trend projections, the e-invoicing process, solutions involved (e-invoicing platforms, PEPPOL, etc.), and how purpose-built IDP aligns with business needs for efficient invoice processing.
  • September 19th: Panel with innovation leaders. This fireside chat with executives from global financial institution, ING, and the Reveal Group, a global intelligent automation services company, will enlighten you on the latest AI tech trends and show the real-world ROI of using purpose-built AI to solve common business challenges.
  • September 26th: Take the guesswork out of your intelligent automation deployment with process intelligence. Doculabs and ABBYY will discuss how to use process mining and task mining to give you data-driven insights to ensure you are leveraging AI to maximize operational efficiency.

“Intelligent Automation Month is an important industry initiative that puts a spotlight on what businesses can achieve with purpose-built AI,” commented Bruce Orcutt, CMO at ABBYY. “Attendees will feel encouraged by the panel of experts featured during September and have actionable insights for how to plan, deploy and measure the ROI of their AI investments for their organizations.”

Luminoso announces Technology Partnership with Qlik and New Leadership Appointments

As an official technology partner of Qlik in AI, Luminoso is excited to seamlessly deliver the power of its technology to Qlik users worldwide.

Luminoso, a leader in AI-driven text analytics, is excited to announce a partnership with Qlik, a global leader in data integration, data analytics, and business intelligence solutions. This partnership is set to transform the field of analytics by integrating Luminoso’s advanced text analytics into Qlik’s leading BI tool, providing professionals with unparalleled insights and decision-making capabilities.

Business analysts and customer experience professionals who try to analyze their customers and markets solely through the lens of structured data limit their ability to derive the broadest and best sets of insights about customer behavior. In fact, this structured data represents only 20% of enterprise data according to multiple analyst estimates. The remaining 80% of data is unstructured and can contain valuable customer insights through sources such as product reviews, discussion forums, trouble tickets, and customer emails.

By integrating Luminoso’s ability to understand unstructured data and structured consumer sentiment with Qlik’s depth of analytics, companies are now able to analyze and understand previously disparate data sources and types to gain a more complete view and thus achieve a competitive advantage.
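
To make that combination concrete, here is a minimal, generic sketch of the pattern: a toy keyword-based sentiment score stands in for a real text-analytics engine, and the result is joined onto structured sales records so both views can be analyzed together. The products, fields, and scoring rule are invented for illustration; this is not Luminoso's or Qlik's actual API.

```python
import pandas as pd

# Structured data: the ~20% that typically lives in BI dashboards.
sales = pd.DataFrame({
    "product": ["A100 Blender", "B200 Kettle"],
    "units_sold": [1200, 800],
    "revenue": [48000, 24000],
})

# Unstructured data: the ~80% (reviews, tickets, emails).
reviews = pd.DataFrame({
    "product": ["A100 Blender", "A100 Blender", "B200 Kettle"],
    "text": [
        "Love the power, but the lid leaks.",
        "Great value and easy to clean.",
        "Stopped working after two weeks, very disappointed.",
    ],
})

# Toy stand-in for a real text-analytics model: +1 per positive cue, -1 per negative cue.
POSITIVE = {"love", "great", "easy"}
NEGATIVE = {"leaks", "disappointed", "stopped"}

def toy_sentiment(text: str) -> int:
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

reviews["sentiment"] = reviews["text"].apply(toy_sentiment)

# Join average customer sentiment onto the structured view for combined analysis.
combined = sales.merge(
    reviews.groupby("product", as_index=False)["sentiment"].mean(),
    on="product",
)
print(combined)
```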

“Structured data can point to a decline in sales that’s already happened. Understanding and tracking customer sentiment through unstructured data can serve as a leading indicator of financial performance but also can help shape future product direction and the entire customer experience. This is why we believe the combined capabilities of Luminoso and Qlik in a single environment are profoundly compelling,” said Mark Zides, CEO & President of Luminoso.

“Luminoso deciphers and quantifies human feedback. Luminoso then integrates its results along with its visual analytics within Qlik Sense® for a new level of insight that was previously not available. This combination will enable comprehensive customer & market intelligence, providing an all-round understanding of consumer sentiments and preferences,” said Hugo Sheng, Senior Director of Partner Engineering.

Luminoso’s solutions were spotlighted at Qlik Connect 2024, where the company engaged with hundreds of global business leaders and analysts from the HR, retail, technology, healthcare, and marketing sectors.

“This integration allows users to perform sentiment analysis and trend identification on a concept level within Qlik dashboards. This enhanced analysis helps businesses make informed decisions, optimize strategies, and drive better outcomes,” said Dalton Ruer, Senior Solutions Architect – Partner Engineering at Qlik.

Announcing Prathik N Sunku as Head of Partnerships & Alliances and David Rautkys as Head of Corporate Development & Innovation

“We are thrilled to announce the appointments of Prathik N Sunku as Luminoso’s new Head of Partnerships & Alliances, and David Rautkys as Head of Corporate Development & Innovation. Prathik brings a wealth of experience in fostering strategic partnerships and driving growth in the analytics and technology sectors, while David’s expertise in innovation and corporate development will be instrumental in expanding Luminoso’s reach and enhancing its impact in the market,” said Mark Zides.

“Our AI’s ability to deliver in-depth, nuanced comprehension allows businesses to quickly familiarize themselves with customer and market dynamics by analyzing real-time feedback from platforms like Reddit, Amazon, and Glassdoor, directly within Qlik dashboards,” said Prathik N Sunku.

“Analysts and consultants can explore new dimensions using feedback data organized as structured insights from discussions and conversations across different channels, all within Qlik workflows. This integration offers sales teams deeper insights into customer behavior and preferences, enabling more targeted and effective sales strategies,” added David Rautkys.

Bitdeer AI Unveils Advanced AI Training Platform with Serverless GPU Infrastructure for Scalable and Efficient AI/ML Inference

Bitdeer AI, a leading AI cloud service provider and part of Bitdeer Technologies Group, has announced the launch of its advanced AI Training Platform, designed to provide fast and scalable AI/ML inference with serverless GPU infrastructure. With the newest AI Training Platform, Bitdeer AI becomes one of the first NVIDIA Cloud Service Providers (CSPs) in Asia to offer both cloud services and an AI training platform.

The Bitdeer AI Training Platform empowers everyone to build, train, and fine-tune AI models at scale through notebooks and resources organized on a project basis. With pre-configured guides and customizable parameters, the platform simplifies the process of developing and refining AI models, making it accessible to a wider audience. It further allows different teams within the same organization to collaboratively build and develop AI models without the need to manage their own servers, setting a new benchmark in efficiency and performance.

High-Performance AI Infrastructure

The newly announced platform offers seamless access to high-performance AI infrastructure and resources of NVIDIA DGX SuperPOD with H100 GPUs, DDN Storage, and InfiniBand Networks. It also improves the efficiency and scalability of AI/ML training processes by utilizing multi-GPUs across various servers simultaneously. By distributing the workload across several GPUs, Bitdeer AI’s services can handle extensive and sophisticated training tasks, making it the optimal choice for organizations aiming to accelerate their AI initiatives.
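
As a rough illustration of what distributing a training workload across several GPUs looks like at the framework level, here is a minimal PyTorch DistributedDataParallel sketch. It uses standard PyTorch APIs only and is not Bitdeer-specific; a managed platform like the one described here would handle provisioning, process launch, and storage around code of this shape. The model, data, and hyperparameters are placeholders.

```python
# Minimal multi-GPU data-parallel training sketch with PyTorch DDP.
# Launch one process per GPU, e.g.: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # rendezvous via torchrun env vars
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(1024, 10).cuda(rank)
    model = DDP(model, device_ids=[rank])        # gradients are synced across GPUs
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    for step in range(100):                      # stand-in for a real data loader
        x = torch.randn(64, 1024, device=rank)
        y = torch.randint(0, 10, (64,), device=rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()                          # gradient all-reduce happens here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```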

Addressing Key Business Challenges

  • Optimizing Development Costs: With Bitdeer AI, businesses can optimize costs through a pay-as-you-go model, only being charged when notebooks are in service mode. This approach ensures that organizations only pay for the resources they use, making AI development more cost-effective.
  • Simplifying Complex GPU Infrastructure Setups: The serverless infrastructure provides a comprehensive integrated development environment for ML, including pre-built algorithms and support for popular frameworks like TensorFlow and PyTorch. This significantly reduces the complexity and time required to develop and train ML models, streamlining the AI development process.
  • Ensuring Reproducibility and Environment Consistency: Bitdeer AI ensures consistency and reproducibility in the build environment, crucial for managing ML model deployment. This consistency prevents unexpected errors when restarting CI/CD jobs or migrating from one platform to another, avoiding costly build errors in long-running ML jobs.

Bitdeer AI collaborated with a software engineering team from the SMU School of Computing and Information Systems to test, verify, and fine-tune the platform, ensuring its robustness and effectiveness. Looking ahead, Bitdeer AI plans to collaborate with NVIDIA to enhance the AI Training Platform by integrating with the NVIDIA AI Enterprise (NVAIE) cloud services such as NIM. This collaboration will enable businesses to customize, test, and scale AI agents efficiently, further solidifying Bitdeer AI’s commitment to providing top-tier AI solutions.

Codeium Introduces Cortex: A First-of-its-Kind Code Reasoning Engine Available to Developers Now

Kleiner Perkins-backed Codeium is unveiling a new code reasoning engine that increases retrieval recall by 200% over state-of-the-art embedding systems

Cortex supports large-scale reasoning for code generation, reviews, and knowledge transfer

Codeium is able to execute this more powerful approach 40x faster and 1000x cheaper than doing so with third-party APIs

Codeium, an AI-powered code acceleration toolkit, announced the launch of Cortex, an AI-powered code reasoning engine that gives developers the power to do more, rather than just another naive copilot. Cortex equips developers with the advanced tools needed to manage and solve complex coding problems more efficiently.

Reasoning is key to AI achieving human or superhuman-level intelligence. As the focus shifts from AI copilots to the next iteration of AI models, Cortex represents a significant leap forward. Instead of assisting with simple, isolated tasks like standard copilots, Cortex supports large-scale reasoning for code generation, reviews, and knowledge transfer, and Codeium is able to execute this more powerful approach 40x faster and 1000x cheaper than doing so with third-party APIs.

Cortex is enabled by a trio of advancements:

  • Proprietary code LLMs: These mimic the familiar retrieval, planning, and execution workflow that developers use to approach tasks.
  • Scalable context handling: Tested on over 100M tokens of code without quality loss, Cortex can reason over vast amounts of context.
  • Lightning-fast processing: Distributed computing allows Cortex to run in seconds rather than hours, enabling real-time human-in-the-loop interactions.

These capabilities apply no matter the task, whether it’s writing, refactoring, reviewing, or explaining code, streamlining the process from hours to minutes and marking a fundamental breakthrough in reasoning for foundational models.

Customers familiar with Codeium will experience 2x the recall of relevant contextual information compared to Codeium’s past systems and 3x the recall of state-of-the-art embedding plus third-party API systems.
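
For readers unfamiliar with the metric, retrieval recall measures the fraction of truly relevant items (here, code files) that show up in a retriever's top-k results, so "3x the recall" means three times as many relevant files surfaced per query. The sketch below computes a generic recall@k over made-up queries; it is not Codeium's evaluation harness.

```python
# Generic recall@k: what fraction of the truly relevant files appear in the
# retriever's top-k results for each query? Queries and paths are invented.

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    return len(set(retrieved[:k]) & relevant) / len(relevant) if relevant else 0.0

eval_set = [
    {
        "query": "where is the auth token refreshed?",
        "retrieved": ["auth/session.py", "utils/log.py", "auth/refresh.py"],
        "relevant": {"auth/refresh.py", "auth/session.py"},
    },
    {
        "query": "how are webhooks signed?",
        "retrieved": ["api/routes.py", "hooks/verify.py", "docs/setup.md"],
        "relevant": {"hooks/sign.py"},
    },
]

scores = [recall_at_k(e["retrieved"], e["relevant"], k=3) for e in eval_set]
print(f"mean recall@3 = {sum(scores) / len(scores):.2f}")   # 0.50 for this toy set
```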

“Individuals have not hit the limits of their potential. Cortex is a huge step forward in helping individual developers solve more complex problems faster, allowing them to approach challenges in a more powerful way,” said Varun Mohan, CEO of Codeium. “Our mission at Codeium has always been to push the boundaries of what’s possible in software development. With Cortex, we’re not just dreaming bigger, we’re delivering bigger.”

Since launching 18 months ago, Codeium has continued to build for its customers, who report significant time savings and efficiency gains of up to 25%. 44.6% of all newly committed code is generated by Codeium and committed unedited. Currently, Codeium serves 600,000 active users and over a thousand customers, such as Dell, Zillow, and Anduril.

The Cortex experience is available today and ready to use for all Enterprise SaaS customers, integrated into the backbone of Codeium’s existing products such as Autocomplete and Chat. In the near future, Cortex will be exposed in new products that will enable novel ways of building complex software with a human-in-the-loop.

Developers and organizations are invited to see firsthand its impact on AI-driven software development.

Rather than a theoretical improvement, Codeium has already demonstrated the value of this new reasoning engine in practice, integrating it into its widely used existing products as well as newly launched products such as Forge, an AI-powered code review assistant.

ClearML Launches New End-to-End AI Platform for Complete AI Lifecycle Management

Accelerating GenAI Adoption with an Open Source Platform for Seamless AI, LLMOps, and MLOps Development, Deployment, and Resource Management

ClearML, the leading solution for unleashing AI in the enterprise, today announced the launch of its expansive end-to-end AI Platform, designed to streamline AI adoption and the entire development lifecycle. This unified, open source platform supports every phase of AI development, from lab to production, allowing organizations to leverage any model, dataset, or architecture at scale. ClearML’s platform integrates seamlessly with existing tools, frameworks, and infrastructures, offering unmatched flexibility and control for AI builders and DevOps teams building, training, and deploying models at every scale on any AI infrastructure.

With this release, ClearML becomes the most flexible, wholly agnostic, end-to-end AI platform in the marketplace today in that it is:

– Silicon-agnostic: supporting NVIDIA, AMD, Intel, ARM, and other GPUs
– Cloud-agnostic: supporting Azure, AWS, GCP, Genesis Cloud, and others, as well as multi-cloud
– Vendor-agnostic: supporting the most popular AI and machine learning frameworks, libraries, and tools, such as PyTorch, Keras, Jupyter Notebooks, and others
– Completely modular: Customers can use the full platform alone or integrate it with their existing AI/ML frameworks and tools such as Grafana, Slurm, MLflow, Sagemaker, and others to address GenAI, LLMOps, and MLOps use cases and to maximize existing investments.

“ClearML’s end-to-end AI platform is crucial for organizations looking to streamline their AI operations, reduce costs, and enhance innovation – while safeguarding their competitive edge and future-proofing their AI investments by using our completely cloud-, vendor-, and silicon- agnostic platform,” said Moses Guttmann, Co-founder and CEO of ClearML. “By providing a comprehensive, flexible, and secure solution, ClearML empowers teams to build, train, and deploy AI applications more efficiently, ultimately driving better business outcomes and faster time to production at scale.”

The ClearML end-to-end AI Platform encompasses newly expanded capabilities and integrates previous stand-alone products, and includes:

– A GenAI App Engine, designed to make it easy for AI teams to build and deploy GenAI applications, maximizing the potential and the value of their LLMs.
– An Open Source AI Development Center, which offers collaborative experiment management, powerful orchestration, easy-to-build data stores, and one-click model deployment. Users can develop their ML code and automation with ease, ensuring their work is reproducible and scalable.
- An AI Infrastructure Control Plane, helping customers manage, orchestrate, and schedule GPU compute resources effortlessly, whether on-premise, in the cloud, or in hybrid environments. These new capabilities, which were also introduced today in a separate announcement, maximize GPU utilization and provide fractional GPUs, as well as multi-tenancy and extensive billing and chargeback capabilities that offer precise cost control, empowering customers to optimize their compute resources efficiently.

ClearML’s AI Platform enables customers to use any type of machine learning, deep learning, or large language model (LLM) with any dataset, in any architecture, at scale. AI Builders can seamlessly develop their ML code and automation, ensuring their work is reproducible and scalable. That’s important, because it addresses several critical challenges faced by organizations in developing, deploying, and managing AI solutions in the most complex and demanding environments. Here’s why it matters:
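
As a small taste of what the open-source development workflow looks like in practice, here is a minimal experiment-tracking sketch using the clearml Python package. The project name, queue name, and hyperparameters are illustrative, and the training loop is a stand-in; treat it as a sketch of the pattern rather than a full recipe.

```python
# Minimal ClearML experiment-tracking sketch (pip install clearml).
# Project/queue names and hyperparameters below are illustrative only.
from clearml import Task

task = Task.init(project_name="demo-project", task_name="baseline-run")

params = {"lr": 1e-3, "batch_size": 64, "epochs": 3}
params = task.connect(params)                   # logged and editable from the web UI

logger = task.get_logger()
for epoch in range(params["epochs"]):
    fake_loss = 1.0 / (epoch + 1)               # stand-in for a real training step
    logger.report_scalar(title="loss", series="train", value=fake_loss, iteration=epoch)

# To run the same task on shared infrastructure instead of locally, it can be
# handed to an agent queue (typically called near the start of the script):
# task.execute_remotely(queue_name="default")
```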

Unified End-to-end Workflow: ClearML provides a seamless workflow that integrates all stages of AI development, from data ingestion and model training to deployment and monitoring. This unified approach eliminates the need for multiple disjointed tools, simplifying the AI adoption and development process.

Superior Efficiency and ROI: ClearML’s new AI infrastructure orchestration and management capabilities help customers execute 10X more AI and HPC workloads on their existing infrastructure.

Interoperability: The platform is designed to work with any machine learning framework, dataset, or infrastructure, whether on-premise, in the cloud, or in a hybrid environment. This flexibility ensures that organizations can use their preferred tools and avoid vendor lock-in.

Orchestration and Automation: ClearML automates many aspects of AI development, such as data preprocessing, model training, and pipeline management. This ensures full utilization of compute resources for multi-instance GPUs and job scheduling, prioritization, and quotas. ClearML empowers team members to schedule resources on their own with a simple and unified interface, enabling them to self-serve with more automation and greater reproducibility.

Scalable Solutions: The platform supports scalable compute resources, enabling organizations to handle large datasets and complex models efficiently. This scalability is crucial for keeping up with the growing demands of AI applications.

Optimized Resource Utilization: By providing detailed insights and controls over compute resource allocation, ClearML helps organizations maximize their GPU and cloud resource utilization. This optimization leads to significant cost savings and prevents resource wastage.

Budget and Policy Control: ClearML offers tools for managing cloud compute budgets, including autoscalers and spillover features. These tools help organizations predict and control their monthly cloud expenses, ensuring cost-effectiveness, by providing advanced user management for superior quota/over-quota management, priority, and granular control of compute resources allocation policies.

Enterprise-Grade Security: The platform includes robust security features such as role-based access control, SSO authentication, and LDAP integration. These features ensure that data, models, and compute resources are securely managed and accessible only to authorized users.

Real-Time Collaboration: The platform facilitates real-time collaboration among team members, allowing them to share data, models, and insights effectively. This collaborative environment fosters innovation and accelerates the development process.

DataClarity Unveils Advanced GenAI Capabilities in Embedded Analytics Platform

Revolutionizing Custom Analytics for ISVs and SaaS Providers with Cutting-Edge GenAI Integration

DataClarity Corporation, a leading provider of advanced analytics solutions, is excited to announce the launch of its new Generative AI (GenAI) capabilities within its embedded analytics platform. This groundbreaking update is specifically designed to empower Independent Software Vendors (ISVs) and Software-as-a-Service (SaaS) providers to embed customized, branded analytics into their applications, offering unparalleled flexibility, robust enterprise security, and governance, all with a low total cost of ownership.

Empowering Custom Analytics Solutions

In today’s data-driven world, the ability to make informed decisions is more critical than ever. DataClarity’s advanced GenAI capabilities leverage Natural Language Processing (NLP) and GenAI technologies to provide organizations with highly tailored analytics solutions that meet their specific operational and strategic needs. The new GenAI templating engine enables the creation of bespoke analytics tools that transcend the limitations of traditional analytics, delivering deeper insights and actionable intelligence.

Key Features and Benefits

  • Chat With Your Data: Engage with your data like never before using our Conversational BI feature. This allows users to interact directly with their data through natural language queries, making it easier to explore, analyze, and extract insights in real time. Whether you’re asking a simple question or conducting a complex analysis, the platform provides instant, context-aware responses that drive informed decision-making (a generic sketch of this conversational pattern follows the feature list below).
  • Customizable GenAI Templates: Create and deploy specialized GenAI templates tailored to various industry needs and organizational challenges, such as generating summaries and actionable steps from adjuster notes in claims management applications.
  • Integration with Leading LLMs: Seamlessly integrate with top-tier Large Language Models (LLMs) like GPT-4, LLaMA, Claude, Mixtral, and more. You can start with a pre-defined model like GPT-4o and augment or fine-tune it with your enterprise data, or build your own custom LLM.
  • Embedded Analytics Capabilities: Build on a powerful platform that allows you to connect to any data anywhere, combine, prepare, curate, and catalog data, create GenAI-ready data semantic layers, author self-service dashboards with GenAI, integrate enterprise security and SSO, and embed analytics everywhere with robust APIs.
  • Cloud-Ready and Cost-Efficient: Deploy cloud-ready architecture anywhere, benefiting from no-cost commercial software that is production-ready with available 24/7 support, ensuring your solutions are always up and running with minimal overhead.
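
To illustrate the Conversational BI pattern mentioned in the feature list above (not DataClarity's implementation), here is a hedged sketch: a natural-language question plus a table schema is sent to an LLM through the official openai client, the returned SQL is run against a small SQLite table, and the answer comes back as data. The table, schema, question, and model choice are assumptions; a production system would add governance, validation, and guardrails around the generated SQL.

```python
# Generic "chat with your data" sketch: question -> LLM-generated SQL -> result.
# Requires `pip install openai` and an OPENAI_API_KEY in the environment.
# Table, schema, question, and model are illustrative assumptions.
import sqlite3
from openai import OpenAI

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("EMEA", "2024-06", 120000.0),
    ("EMEA", "2024-07", 135000.0),
    ("APAC", "2024-07", 98000.0),
])

schema = "sales(region TEXT, month TEXT, revenue REAL)"
question = "Which region had the highest revenue in July 2024?"

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": f"Write one SQLite query for schema {schema}. Return SQL only, no prose."},
        {"role": "user", "content": question},
    ],
)
# Strip any code fences the model may wrap around the query.
sql = resp.choices[0].message.content.strip().strip("`").removeprefix("sql").strip()
print(sql)
print(conn.execute(sql).fetchall())   # e.g. [('EMEA', 135000.0)]
```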

“The advent of services like ChatGPT has set new expectations for application end users. They now demand the ability to conversationally interact with, analyze, and obtain answers and recommendations from their data. Our new GenAI capabilities address this evolving need, offering ISVs and SaaS providers the most powerful platform to rapidly embed both GenAI and embedded analytics into their applications. There is no better solution for delivering these advanced capabilities with minimal effort and investment,” said Mark Mueller, CEO of DataClarity Corporation. “This launch underscores our commitment to innovation and mission to make cutting-edge analytics accessible and affordable for all.”

Dragos Georgescu, CTO of DataClarity Corporation, added, “The flexibility and power of our GenAI platform are game-changers for ISVs and SaaS providers. By integrating LLMs with customizable templates, we enable our customers to embed and tailor GenAI analytics for their specific application needs while harnessing the full potential of their data. Our platform is designed to scale with our customers, ensuring they stay ahead of the curve in an ever-evolving GenAI analytics landscape.”

GroupBy Launches Enrich AI: Revolutionizing Product Data Attribution with Advanced AI and Proprietary Global Taxonomy


GroupBy Inc., a leading provider of AI-first Search and Product Discovery solutions for B2C and B2B retail, today unveiled Enrich AI, a cutting-edge product data enrichment platform designed to revolutionize product information management. Leveraging sophisticated generative AI algorithms and GroupBy’s years of deep retail expertise, Enrich AI empowers retailers to create rich shopping experiences with enhanced product data – driving substantial revenue growth.

“Enrich AI represents a quantum leap in data enrichment and product data management,” stated Roland Gossage, CEO at GroupBy. “By seamlessly integrating advanced AI with our deep domain expertise in retail, we are delivering a solution that addresses the critical challenges retailers face in managing and optimizing their product data. This sophisticated self-serve platform enables B2C and B2B retailers to achieve unprecedented levels of data accuracy, consistency, and completeness, ultimately optimizing search relevance, merchandising effectiveness, and overall business performance.”

Enrich AI harnesses the power of AI to revolutionize product data management. By automating complex data enrichment tasks, Enrich AI accurately extracts, standardizes, and enriches product data based on attributes, descriptions, and images at scale. This streamlined process minimizes costly errors, accelerates time-to-market, and frees up valuable resources for strategic initiatives, while making it easier for retailers to manage their catalogs.

Moreover, Enrich AI leverages GroupBy’s extensive proprietary Global Taxonomy Library to ensure consistent and accurate product categorization. By applying a robust, industry-specific taxonomy, developed through GroupBy’s breadth and years of experience working with various retail verticals, retailers can enhance product discoverability, improve search relevance, and optimize merchandising efforts. This granular level of product classification empowers retailers to deliver highly targeted attributes and values for product recommendations and personalized shopping experiences.
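
To show the input/output shape of attribute extraction and taxonomy assignment in the simplest possible terms, here is a toy, rule-based sketch. Enrich AI itself uses generative models over product text and images together with GroupBy's proprietary Global Taxonomy Library; the mini-taxonomy, attribute vocabularies, and matching rules below are made up purely for illustration.

```python
# Toy product enrichment: map raw title/description text to a category and attributes.
# Rule-based stand-in for illustration only; real platforms use generative models.
TAXONOMY = {
    "Apparel > Footwear > Running Shoes": {"shoe", "sneaker", "running"},
    "Home > Kitchen > Cookware": {"pan", "skillet", "pot"},
}
COLORS = {"black", "white", "red", "blue"}
MATERIALS = {"leather", "mesh", "stainless steel", "ceramic"}

def enrich(title: str, description: str) -> dict:
    text = f"{title} {description}".lower()
    words = set(text.replace(",", " ").replace(".", " ").split())
    category = max(TAXONOMY, key=lambda c: len(words & TAXONOMY[c]))  # best keyword overlap
    return {
        "category": category,
        "color": next((c for c in COLORS if c in words), None),
        "material": next((m for m in MATERIALS if m in text), None),
    }

print(enrich("Aero Mesh Running Shoe", "Lightweight blue mesh upper with a cushioned sole."))
# {'category': 'Apparel > Footwear > Running Shoes', 'color': 'blue', 'material': 'mesh'}
```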

To further enhance data reliability and integrity, Enrich AI employs advanced algorithms to detect and correct errors, inconsistencies, and missing information within product data. By improving data quality, retailers can reduce costly returns, enhance customer satisfaction, and build trust in their brand.

Ultimately, by optimizing product discoverability and driving business growth, Enrich AI enables retailers to improve the quality of their data, personalize customer experiences, and increase customer satisfaction. With enriched product data, retailers can unlock new revenue streams, gain a competitive advantage, and thrive in today’s dynamic retail landscape.

Key technical features of Enrich AI include:

  • Advanced AI Algorithms: Utilize state-of-the-art generative AI models for accurate and efficient data enrichment.
  • Product Attribute Intelligence: Fast and efficient attribute enrichment from product image and text data in a single step.
  • Deep Retail Expertise: Built on a foundation of industry knowledge and best practices across retail verticals.
  • Taxonomy Enrichment: Manage taxonomies tailored to specific product categories based on business needs.
  • Product Catalog Management: Automated management of catalogs to maintain enrichment for products that churn.
  • Robust PIM/PXM Integration: Seamlessly integrate with existing product information management systems.
  • Quality Assurance: Automated data quality checks ensure data accuracy and consistency through continuous monitoring and validation.
  • Product Transformations: Involve humans in the loop to quickly curate and edit product values across the catalog.
  • User-Friendly Interface: Empower users with intuitive tools for easy data management and enrichment.
  • Simplified Integration: Seamlessly access Enrich AI functionality directly from the GroupBy Command Center, eliminating the need for manual data injection and streamlining the enrichment process.
  • Expert Support: Access professional services for seamless catalog implementation and data enrichment aligned with the proprietary Global Taxonomy Library.

By combining AI-powered precision with GroupBy’s deep retail expertise, Enrich AI delivers unparalleled product data enrichment capabilities. Retailers can now unlock the full potential of their product data, gain a competitive edge, and drive long-term business success.

Foundry Launches New Platform to Boost AI Compute Access and Efficiency for Developers

Foundry Cloud Platform democratizes AI development and solves the GPU shortage with the first real-time compute market purpose-built for AI 

Foundry, an emerging cloud provider founded by alumni from Google DeepMind’s core Deep Learning team, launched Foundry Cloud Platform, a real-time market and orchestration engine for GPU compute that simplifies access to the infrastructure required to build and deploy AI. Foundry’s platform reduces operational complexity and improves compute cost efficiency by up to 6x, putting AI development within reach of more organizations and accelerating global AI innovation.

The AI boom has made GPU servers one of the world’s most strategic commodities. Surging demand has outpaced the capacity of traditional public clouds, prompting tech giants and AI startups alike to spend billions to independently secure essential hardware for AI development. Industry-standard long-term contracts and unreliable infrastructure fuel overprovisioning to guarantee capacity and redundancy, further reducing broader access. This arms race for GPU compute ownership has masked a critical issue and opportunity: existing GPUs are vastly underutilized due to the unique compute requirements of AI development.

“The GPU compute market as it exists today is one of the most inefficient commodity markets in history, and it’s directly limiting critical AI innovations that will benefit society,” explains Jared Quincy Davis, founder and CEO of Foundry. “The majority of AI research and development teams struggle to access affordable and reliable compute for their workloads, while exceptionally well-funded organizations are forced to purchase long-term GPU reservations that they rarely utilize to maximum capacity. Foundry Cloud Platform addresses this market failure by aggregating and redistributing idle compute capacity to enable faster breakthroughs while improving return on GPU investments.”

Foundry Cloud Platform offers AI teams of every scale a more efficient and approachable way to access GPU compute, optimizing performance, cost efficiency, and reliability.

Foundry Cloud Platform aggregates compute into a single, dynamically-priced pool that offers GPU capacity in two ways, optimized for the specific and unique needs of different AI workloads:

  • Resellable reserved instances. AI teams have self-serve access to reserve short-term capacity from Foundry’s pool of GPU virtual machines. Rather than pay for fixed, long-term contracts, customers can guarantee compute for predictable workloads by reserving interconnected clusters from the pool for as little as three hours. Customers can further increase cost-efficiency by reselling any idle capacity from their reservations. For example, if a customer reserves 128 NVIDIA H100s and sets aside 16 as “healing buffer” nodes, they can temporarily relist those 16 nodes on the market, where they generate credits until the customer recalls them or the initial reservation period ends. Reserved usage is optimal for pre-planned workloads like training runs and critical day-to-day developer tasks like verification and debugging.
  • Spot instances. All unreserved and relisted compute on the platform is available as spot instances that users can bid on for interrupt-tolerant workloads like model inference, hyperparameter tuning, and fine-tuning.

Foundry Cloud Platform uses auction theory to set market-driven prices for reserved and spot compute based on real-time supply and demand. Whenever prices become too high, Foundry increases the overall GPU capacity of the platform, stabilizing the market.
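
As a toy illustration of market-driven pricing (not Foundry's actual mechanism, which is not detailed here), the sketch below clears a uniform-price spot auction: bids are sorted by price, capacity is allocated from the highest bid down, and every accepted bidder pays the lowest accepted price. The bidders, prices, and supply figures are invented.

```python
# Toy uniform-price auction for spot GPU capacity. Illustrative only; this is
# not Foundry's pricing algorithm. bids: (bidder, price_per_gpu_hour, gpus_requested).

def clear_spot_market(bids, supply_gpus):
    allocations, prices_accepted, remaining = [], [], supply_gpus
    for bidder, price, qty in sorted(bids, key=lambda b: b[1], reverse=True):
        if remaining == 0:
            break
        take = min(qty, remaining)
        allocations.append((bidder, take))
        prices_accepted.append(price)
        remaining -= take
    clearing_price = min(prices_accepted) if prices_accepted else None
    return allocations, clearing_price

bids = [("team-a", 2.40, 64), ("team-b", 1.90, 96), ("team-c", 1.10, 128)]
print(clear_spot_market(bids, supply_gpus=128))
# ([('team-a', 64), ('team-b', 64)], 1.9) -- scarcity pushes the clearing price up
```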

The platform also offers Kubernetes workload orchestration, which eliminates manual scheduling by programmatically adding reserved and spot instances to a managed Kubernetes cluster. Leveraging Kubernetes clusters through Foundry Cloud Platform allows AI development teams to optimize price-performance and minimize inference latency during traffic spikes by quickly scaling capacity horizontally.
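
Horizontal scaling of an inference workload ultimately comes down to adjusting replica counts on a Kubernetes cluster; the sketch below does this with the official Kubernetes Python client. The deployment and namespace names are hypothetical, and an orchestration layer such as the one described above would issue this kind of change automatically as capacity is added to the cluster.

```python
# Scale an inference Deployment horizontally with the official Kubernetes
# Python client (pip install kubernetes). Names here are hypothetical.
from kubernetes import client, config

config.load_kube_config()        # use load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

# Bump replicas to absorb a traffic spike; an autoscaler or orchestration
# engine would normally drive this value from observed load.
apps.patch_namespaced_deployment_scale(
    name="model-inference",
    namespace="ml",
    body={"spec": {"replicas": 8}},
)
```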

Infinite Monkey, an AI startup developing architectures for AGI, uses Foundry Cloud Platform to access a variety of state-of-the-art GPUs without overprovisioning. “With Foundry Cloud Platform, we made actionable discoveries in hours, not weeks,” says Matt Wheeler, Research Engineer at Infinite Monkey. “When we believe we could benefit from additional compute, we just turn it on. When we need to pause to study our results and design the next experiment, we turn it off. Because we aren’t locked into a long-term contract, we have the flexibility to experiment with a variety of GPUs and empirically determine how to get the best price-performance for our workload.”

“Foundry Cloud Platform has accelerated science at Arc,” notes Patrick Hsu, Co-Founder and Core Investigator at Arc Institute, a nonprofit research organization studying complex diseases, including cancer, neurodegeneration, and immune dysfunction. “Our machine learning work brings demanding performance infrastructure needs, and Foundry delivers. With Foundry, we can guarantee that our researchers have exactly the compute they need, when they need it, without procurement friction.”

Foundry is building its platform to meet the highest standards for security and compliance to ensure maximum protection for customer data, achieving SOC 2 Type II certification earlier this year.

AiThority Interview with Trevor Lanting, Chief Development Officer D-Wave

Trevor Lanting, Chief Development Officer at D-Wave, shares the distinct advantages of annealing and gate-model quantum computing for various industries, emphasizing their roles in optimization, materials science, and AI. In this interview, he talks about the potential for quantum computing to alleviate the growing compute demands of AI and ML across multiple sectors.

———–

Please share your journey to becoming Chief Development Officer (CDO) at D-Wave and what inspired your passion for quantum computing.

I have a background in physics, and I have always been interested in technology development. I am trained as an experimental physicist and my graduate work involved building superconducting instrumentation for microwave astronomy.

Through that training, I realized my passion really centered on developing technology and building tools. When D-Wave was recruiting for an experimental physicist in 2008, I jumped at the chance to join the team. Over the last 15 years, I have been involved with many aspects of our technology development, contributing directly to our annealing quantum computing development, our performance research program, and helping lead our software and algorithms teams. I was involved with work that was instrumental in demonstrating quantum entanglement in the fabric of our annealing processors, a major step in validating our technology approach.

Several months ago, I stepped into a leadership role helping direct our overall research and product development efforts across software and hardware systems. I am incredibly excited about quantum computing. We are building technology that harnesses quantum mechanics to produce fundamentally new computational tools. As the technology rapidly matures, we are seeing a growing set of use cases that span from the acceleration of scientific discovery to optimization of complex business processes, and emerging machine learning applications.

Can you talk about the primary differences between D-Wave’s annealing and gate-model quantum computers, and how do these technologies benefit industries like AI, logistics, and materials sciences?

Annealing and gate-model quantum computing are two of the leading approaches to building practical large-scale quantum computing technology. These approaches offer distinct and complementary advantages for different use cases and applications, and we are developing both technology platforms.  

Annealing systems are uniquely suited for solving optimization problems. These problems, like tour scheduling, resource scheduling, and cargo loading, occur across many industries, like supply-chain logistics and manufacturing, and solving these problems leads to more efficient operations and direct cost savings.

 For materials sciences, gate-model systems have the potential to simulate the behavior of novel molecules and interactions between these molecules, promising to accelerate material discovery and drug design.

 For AI, annealing quantum computing can enhance machine learning algorithms, particularly in feature selection, model optimization, and providing rich quantum distributions that can be directly harnessed in generative AI architectures.  

Annealing quantum computing does have several advantages over current gate-model systems: annealing protocols do not require the significant pre-processing overheads associated with many gate-based protocols; annealing processor controls are continuously applied, making the processors more resilient to errors and noise; and annealing processors are scaling to enterprise-level problem sizes more quickly. These characteristics make annealing quantum computing ideal for addressing today’s real-world challenges across industries.
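
For readers curious what formulating an optimization problem for an annealer looks like, here is a tiny sketch using D-Wave's open-source dimod package: a toy feature-selection QUBO that rewards individually valuable features and penalizes picking correlated pairs. The weights are invented, and the brute-force ExactSolver is used so the example runs anywhere; on real hardware the same binary quadratic model would be submitted to a D-Wave sampler instead.

```python
# Toy feature-selection QUBO with D-Wave's open-source dimod (pip install dimod).
# Weights are made up; swap ExactSolver for a D-Wave sampler to run on hardware.
import dimod

value = {"x1": 0.8, "x2": 0.7, "x3": 0.3}            # reward for selecting each feature
redundancy = {("x1", "x2"): 0.9, ("x2", "x3"): 0.2}  # penalty for selecting a correlated pair

bqm = dimod.BinaryQuadraticModel("BINARY")
for feature, reward in value.items():
    bqm.add_variable(feature, -reward)               # minimizing energy, so negate rewards
for (u, v), penalty in redundancy.items():
    bqm.add_interaction(u, v, penalty)

best = dimod.ExactSolver().sample(bqm).first         # brute force; fine for 3 variables
print(best.sample, best.energy)                      # {'x1': 1, 'x2': 0, 'x3': 1} -1.1
```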

How do you see the integration of quantum computing with AI and machine learning evolving, and what challenges and opportunities do you foresee?

It’s becoming apparent that the broader AI industry is facing a severe computing crunch. The amount of compute and the related energy costs needed to keep up with an increasing set of use cases is rapidly escalating. The industry should recognize that quantum computing technology might offer real opportunities to meet the growing demand for larger, more performant, and more energy-efficient AI and ML architectures and workloads.

 At the same time, we are in the early part of exploring how best to harness the power of quantum computing for AI. There is currently work in development here at D-Wave on using quantum distributions for designing modern generative AI architectures. This is an emerging field that involves directly using quantum processing unit samples that are not easily generated by classical computers, all of which could potentially improve how generative AI models are built.

 Customers such as TRIUMF, a Canadian physics lab, Honda Innovation Lab and Tohoku University, are already exploring D-Wave technology to address a variety of AI/ML workloads including pre-training optimization and more accurate and efficient model training.

 D-Wave has introduced a hybrid-quantum approach to optimizing feature selection in AI/ML model training and prediction. This approach is designed to help improve models’ accuracy by employing quantum systems to select the most representative dataset characteristics. Our partnership with Zapata AI continues to explore how the combination of quantum computing and generative AI could accelerate the development of new pharmaceuticals.

What are your predictions for the future of quantum computing, particularly in scalability, practical applications, and mainstream adoption across industries?

 For us at D-Wave, the future of quantum computing is firmly anchored in its practical applications. We’re already witnessing real-world impact across various industries, solving problems to directly improve people’s daily lives. Examples include quantum-optimized routes for grocery deliveries and more efficient supply-chain management. Overall, I expect quantum solutions to impact business operations by improving efficiencies in supply chain management, financial modeling, and resource allocation.

Scalability is an important factor: the underlying quantum computing systems need to be designed for scalability from the beginning, and this is a key reason why we focused our initial technology development effort on superconducting quantum annealing systems. As quantum computers become more powerful, growing in qubit count and quality, their ability to tackle larger and more complex problems will also increase.

 I believe quantum computing will play a meaningful role in drug discovery, accelerating the development of new medications and materials, and will be more broadly adopted across industries that face optimization challenges.

How do you think AI advancements will influence the evolution of quantum computing hardware and software solutions?

 We’ve talked about AI advancements making things faster and more scalable, and of course, this will allow for new discoveries. AI tools could also make quantum computing more accessible by automating some of the complex processes involved in quantum computations, problem formulation, solver parameter selection, and adding more user-friendly interfaces. This aligns with our goal at D-Wave, which is to make quantum computing a practical tool for solving real-world problems across various industries.

Could you share your thoughts on where you see AI, machine learning, and other smart technologies heading beyond 2024?

 In the future, I think we will see quantum-enhanced AI models outperform purely classical AI in many domains. Like any new emerging general-purpose technology, if we can put quantum computing, AI, and machine learning technologies in the hands of a broad and diverse set of users as fast as possible, unexpected and powerful use cases will quickly emerge and we will see these technologies embedded into our daily lives. And as domain experts in a wide range of fields adopt these powerful tools, progress on drug design, materials innovation and simulation, business optimization, and scientific discovery will accelerate. 

Trevor Lanting is a senior R&D executive with over 15 years of experience in technology development. He currently leads D-Wave’s product development and research organization, overseeing teams responsible for software, systems, cloud services, and performance research. Trevor has played a key role in driving the development and deployment of five generations of annealing quantum computing systems. He is passionate about aligning fundamental technology development with customer value and is dedicated to rapidly bringing the cutting-edge computing technology developed by his team to market.

D-Wave is the leader in the development and delivery of quantum computing systems, software and services and is the world’s first commercial supplier of quantum computers and the only company developing both annealing quantum computers and gate-model quantum computers. Our mission is to unlock the power of quantum computing for the world. We do this by delivering customer value with practical quantum applications for problems as diverse as logistics, artificial intelligence, materials sciences, drug discovery, scheduling, cybersecurity, fault detection, and financial modeling.

rabbit r1, the AI-native Pocket Companion, Expands Availability Across the UK and EU

New r1 orders globally are now shipping within three business days

Today, AI startup rabbit inc. announced that r1, a first-of-its-kind AI-native device that puts easy access to the industry’s most advanced consumer AI models in your pocket, is now available across the European Union and the United Kingdom. In addition, rabbit announced that new r1 orders globally will ship within three business days.

r1 is rabbit’s first hardware product and has made waves in the consumer electronics industry as the best-selling and most-used native AI device to-date, with more than 100,000 devices sold. One-third of r1 devices sold to-date have been purchased in Europe. Customers have discovered a wide variety of helpful use cases to improve their daily lives with AI, from deciphering complicated parking and road signs, and getting important gardening tips using the AI-powered rabbit eye camera, to translating conversations in real-time and generating healthy recipe ideas with access to leading large language models (LLMs).

r1, rabbit OS and rabbithole – a personalized AI agent experience from end to end

Running on a personalized operating system, r1 is the only consumer AI device that gives users quick and affordable access to the leading AI models in one device, including LLMs from ChatGPT, Perplexity, and Anthropic. Combining custom-built software with sleek hardware designed in collaboration with renowned design firm Teenage Engineering, r1 is an on-the-go curiosity catcher, a journal you write with your voice, an encyclopedia at your fingertips, a recorder that summarizes your every thought, a pocket translator, and a reason to bring friends and family together. Each user can manage their personalized history and supported online services in rabbithole, the accompanying secure cloud hub and “brain” of the user’s personal AI assistant. Take a look at our use cases to learn more about how r1 offers a new and useful AI-native experience.

Commitment to constant improvement for a better user experience

Since launching r1, rabbit has demonstrated its community-focused approach to ensure a fast feedback loop between its consumers and engineers, continuously and rapidly improving and adding more value to r1 through regular software updates based on user feedback. The new “beta rabbit” feature, introduced in July, enables r1 to provide significantly smarter answers to more complex queries. A number of other key updates have also made r1 more useful and delightful, including global memory recall, which personalizes responses in relation to a user’s rabbithole journal content; Wolfram|Alpha integration, which provides users with increased accuracy in computational queries related to mathematics, science, technology, society, and culture; and Magic Camera, which turns photos taken by r1 into creative AI-generated images. While there are currently some regional limitations in Europe, including availability of certain connected apps and additional operating languages, the product is vastly more capable than it was at launch, with new improvements and features added through regular over-the-air (OTA) and cloud updates. This cadence will also apply to Europe as the team continues to listen to feedback and rapidly apply it to the user experience.

“r1 was designed from day one to be a global device, and we want to deliver it to every corner of the world,” said Jesse Lyu, Founder and CEO of rabbit. “We’re very encouraged to see the existing appetite for r1 in Europe already as we pave the way in this new category of AI-native products.”

Built around security from day 1 and constantly becoming stronger

rabbit designed the architecture of rabbit OS with security and privacy at the core. “The most important thing we do at rabbit is earn and protect customer trust. This is something that everyone, not just the security team, is charged with when they join the company,” said Matt Domko, Head of Security at rabbit. “We’re actively taking steps to consistently improve our security program on a daily basis.”

To keep up with the fast pace of rabbit’s software updates and growing set of consumer use cases, rabbit regularly works with credible third-party security partners to test the resilience of its system and ensure consumers’ privacy and information security are effectively protected. Learn more about the company’s recent penetration test in this blog post.

r1 only listens when the user presses the physical push-to-talk button to interact with the device, and its rotating camera defaults to a position that physically blocks the lens unless the user explicitly requests it. rabbit also implemented a vulnerability disclosure program (VDP) in May to encourage researchers, developers, and the general public to submit vulnerabilities in a responsible manner.

rabbit r1 is available today across the entire European Union (excluding Malta) and can be purchased at www.rabbit.tech. It costs $199 with no subscription necessary; please refer to the website for current local prices in the UK and Europe. New orders globally will also be processed and shipped within three business days.
