generative AI Archives - AiThority https://aithority.com/category/machine-learning/generative-ai/ Artificial Intelligence | News | Insights | AiThority Tue, 13 Aug 2024 06:43:31 +0000 en-US hourly 1 https://wordpress.org/?v=6.6.1 https://aithority.com/wp-content/uploads/2023/09/cropped-0-2951_aithority-logo-hd-png-download-removebg-preview-32x32.png generative AI Archives - AiThority https://aithority.com/category/machine-learning/generative-ai/ 32 32 AI Coding Tools: Are They a Threat or a Boon for Coders? https://aithority.com/machine-learning/ai-coding-tools-are-they-a-threat-or-a-boon-for-coders/ Tue, 13 Aug 2024 06:43:31 +0000 https://aithority.com/?p=574927 AI Coding Tools: Are They a Threat or a Boon for Coders?

Artificial intelligence is revolutionizing software development at an unprecedented pace. AI coding tools are unlocking new possibilities, enabling developers to ideate, create, and iterate with remarkable speed. This rapid advancement raises pertinent questions: Can AI write code? Can AI coding tools assist in learning to code? More crucially, does AI pose a threat to the […]

The post AI Coding Tools: Are They a Threat or a Boon for Coders? appeared first on AiThority.

]]>
AI Coding Tools: Are They a Threat or a Boon for Coders?

Artificial intelligence is revolutionizing software development at an unprecedented pace. AI coding tools are unlocking new possibilities, enabling developers to ideate, create, and iterate with remarkable speed. This rapid advancement raises pertinent questions: Can AI write code? Can AI coding tools assist in learning to code? More crucially, does AI pose a threat to the future of software engineering by potentially replacing human programmers?

Contrary to these concerns, the future of software engineering remains secure. AI tools are not job-destroyers but valuable additions to a programmer’s toolkit. They enhance efficiency and creativity without rendering human expertise obsolete. As we explore the capabilities and implications of AI in coding, it becomes evident that these tools are more boon than threat, augmenting rather than replacing the role of the software engineer.

AI is now embedded in many activities today, from streaming television entertainment to finding products online. In coding, AI automates tedious processes and assists developers in tackling complex troubleshooting problems.

Developers use AI for various tasks, from marketing integration tools to customer-facing software applications. By 2023, 92% of U.S. coders reported using AI tools, and 70% claimed these tools improved their work (GitHub). The widespread adoption of AI coding tools indicates a significant shift in the industry.

Also Read: Conversational AI Is Here to Stay, but Don’t Overlook the Risks Before Basking in the Rewards

What are AI Coding Assistants?

AI coding assistants are tools powered by machine learning algorithms designed to enhance the coding process. They provide developers with intelligent code completion, generate code snippets, and automate repetitive tasks. By offering context-aware suggestions and autocompletion, these assistants significantly speed up coding and reduce developers’ cognitive load, making coding faster and more efficient.

However, their capabilities extend beyond basic autocompletion. Leading AI coding tools offer features such as:

  • Text-to-code generation from natural language descriptions
  • Automatic bug detection and fix suggestions
  • Code refactoring recommendations
  • Language translation (converting code from one programming language to another)
  • Real-time code explanations and documentation generation

Current Capabilities of AI in Code Writing

As of now, AI offers several advanced capabilities in coding:

  1. Code Autocompletion
    AI-driven code editors utilize machine learning algorithms to analyze coding patterns and suggest code snippets. This feature enhances coding efficiency and productivity and assists developers in learning best practices and conventions.
  2. Automated Code Generation
    AI can generate code snippets or entire functions based on user prompts. This functionality accelerates development, particularly for repetitive or boilerplate code.
  3. Code Refactoring
    AI tools can evaluate code and recommend improvements to enhance readability, performance, or compliance with coding standards. This aids in maintaining clean and efficient codebases.
  4. Bug Detection and Fixes
    AI-powered tools can identify and correct bugs in code, detecting potential issues before runtime. This helps developers address and resolve bugs early in the development cycle.

Functionality of AI Code Assistants

AI code assistants initially relied on Natural Language Processing (NLP) techniques. These methods enabled the assistants to process extensive code data, comprehend coding patterns, and generate relevant suggestions or insights for developers.

Recent advancements in generative AI have enhanced these tools significantly. Modern code assistants now incorporate large language models (LLMs) such as GPT-3.5 and GPT-4. These models can produce human-like text and code based on contextual input. They generate syntactically accurate, contextually relevant code segments and interpret natural language prompts, offering increased convenience and utility for developers.

AI code assistants are trained on various datasets. Some use extensive, publicly available datasets, such as those from GitHub repositories, while others are trained on specific datasets related to particular organizations. The training process for LLM-based code assistants involves two main steps:

  • Pre-training: The model learns the structure of natural language and code from a broad dataset.
  • Fine-tuning: The model is further trained on a specialized dataset to enhance its performance for specific tasks.

Also Read: AI and IoT in Telecommunications: A Perfect Synergy

Will AI Replace Programmers?

AI will not replace programmers but will enhance their ability to write code. AI-powered coding assistants such as ChatGPT, GitHub CoPilot, and OpenAI Codex are already supporting developers by generating high-quality code snippets, identifying issues, and suggesting improvements. These tools expedite the coding process, though AI will take time to create production-ready code beyond a few lines.

Here is how AI will impact software development in the near future:

Advancement of Generative AI

Generative AI will improve in automating tasks and assisting developers in exploring options. It will help optimize coding for scenarios beyond AI’s current understanding.

AI as a Coding Partner

AI will increasingly serve as a coding partner, aiding developers in writing software. This collaboration is already underway and will expand as AI becomes capable of handling more complex coding tasks. AI tools will be integrated into IDEs, performing coding tasks based on prompts while developers review the output. This partnership will accelerate certain aspects of the software development lifecycle (SDLC), allowing developers to focus on more intricate tasks.

The Continued Importance of Programmers

Programmers will remain essential, as their value lies in determining what to build rather than just how to build it. AI will take time to understand the business value of features and prioritize development accordingly. Human programmers will continue to play a crucial role in interpreting and applying business needs.

Benefits and Risks associated with AI Coding 

Benefits:

  1. Accelerated Development Cycles
    AI coding tools enhance the speed of writing code, leading to quicker project turnovers. By automating code generation, these tools enable teams to meet tight deadlines and deliver projects faster. According to McKinsey, generative AI can make coding tasks up to twice as fast.
  2. Faster Time to Market and Innovation
    AI code generation shortens the software development lifecycle, giving organizations a competitive edge by reducing time to market. These tools streamline traditional coding processes, allowing products and features to reach end-users rapidly and capitalize on market trends.
  3. Enhanced Developer Productivity
    AI code generators boost developer efficiency by predicting next steps, suggesting relevant snippets, and auto-generating code blocks. Automating repetitive tasks allows developers to focus on complex coding aspects, increasing productivity. A Stack Overflow survey shows a 33% increase in productivity with AI-assisted tools.
  4. Democratization of Coding
    AI code generators make coding more accessible to novices by lowering entry barriers. Even those with minimal coding experience can use these tools to produce functional code, fostering inclusivity within the development community.

Risks:

  1. Code Quality Concerns
    AI-generated code can vary in quality, potentially harboring issues that lead to bugs or security vulnerabilities. Developers must ensure that AI-generated code meets project standards and is reliable. UC Davis reports that AI-generated code may contain errors due to lack of real-time testing.
  2. Overreliance and Skill Erosion
    Excessive dependence on AI-generated code may diminish developers’ hands-on skills. It is important for developers to balance AI tool usage with active engagement in the coding process to prevent skill atrophy and ensure understanding of coding fundamentals.
  3. Security Implications
    AI code generators might inadvertently introduce security vulnerabilities. Developers should rigorously review and validate generated code to adhere to security best practices. A Stanford University study highlights instances of insecure code generated by AI tools.
  4. Understanding Limitations
    AI models have limitations in grasping complex business logic or domain-specific requirements. Developers need to recognize these limitations and intervene to ensure the code aligns with the project’s unique needs, such as compliance with data security regulations in sensitive applications.

Standout AI Coding Tools in 2024

GitHub Copilot

Tabnine

Amazon CodeWhisperer

Codiga

Sourcegraph

Codium Ltd.

AskCodi

CodeWP

OpenAI Codex

Also Read: AiThority Interview with Seema Verma, EVP and GM, Oracle Health and Life Sciences

Future of AI in Coding 

As organizations embark on the journey of AI code generation, the focus must be on leveraging its advantages while effectively managing associated risks. Understanding and responsibly navigating these elements will enable the creation of innovative, efficient, and secure software solutions.

Thoughtful implementation, ongoing learning, and a commitment to code quality are crucial in this evolving landscape. AI tools are revolutionizing secure coding by providing developers with advanced tools for identifying and correcting issues rapidly. As AI integrates more deeply into coding practices, it will enhance security measures and support developers in producing robust, secure code.

By adopting AI-based tools and incorporating secure coding practices, developers and organizations can address diverse digital security threats and fortify code protection. The future of secure coding appears promising, with AI playing a pivotal role in advancing security and efficiency in software development.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post AI Coding Tools: Are They a Threat or a Boon for Coders? appeared first on AiThority.

]]>
Conversational AI Is Here to Stay, but Don’t Overlook the Risks Before Basking in the Rewards https://aithority.com/machine-learning/conversational-ai-is-here-to-stay-but-dont-overlook-the-risks-before-basking-in-the-rewards/ Thu, 08 Aug 2024 07:47:09 +0000 https://aithority.com/?p=574539 Conversational AI Is Here to Stay, but Don’t Overlook the Risks Before Basking in the Rewards

We’re at a point where organizations should not bypass implementing and using AI in some capacity in their operations. The benefits of the technology are too great to overlook in how it can augment employees in their work and solve business use cases in more efficient ways. Conversational AI chatbots can help simplify and streamline […]

The post Conversational AI Is Here to Stay, but Don’t Overlook the Risks Before Basking in the Rewards appeared first on AiThority.

]]>
Conversational AI Is Here to Stay, but Don’t Overlook the Risks Before Basking in the Rewards

We’re at a point where organizations should not bypass implementing and using AI in some capacity in their operations. The benefits of the technology are too great to overlook in how it can augment employees in their work and solve business use cases in more efficient ways.

Conversational AI chatbots can help simplify and streamline entire processes through the automation of day-to-day activity. Its benefits are unmatched – whether you’re an HR professional who uses this technology to optimize onboarding and recruitment with ease or a call center agent who needs to tap into the right information for customers in seconds.

But this is not a technology that you can implement and then walk away from. To maximize its value and benefits, it requires continuous monitoring and iteration. On top of that, as with any new technology, the risks involved need to be fully realized. Accuracy and security are at the top of the list and if left unchecked, an organization can expose itself to negative reputational or even financial repercussions.

With stakes that high and the presence of conversational AI chatbots increasing by the day, it’s critical for organizations to overcome these hurdles from the start so that they can fully reap the rewards that this technology will bring.

Also Read: AiThority Interview with Dr. Arun Gururajan, Vice President, Research & Data Science, NetApp

Hallucination and reliability issues

Accuracy remains a problem as generative AI continues to proliferate in the market. According to a report from Aporia on AI models, 89% of machine learning engineers say their LLMs exhibit signs of hallucinations, for example.

It’s an issue that’s even made headlines. Google’s new AI Overview feature was found recently to recommend that users add glue to their pizza when baking so that they can get the cheese to stick better. Google claimed the issue was due to an information gap and a misinterpretation of language when searching for results.

For all the incredible capabilities and thoughtful responses that AI can provide, hiccups can and will happen, and the ultimate cause of this problem is improper AI training. There’s much that can be done to fix this, and it starts with training data. The varying levels of training data and algorithms used from model to model is the underlying issue here. Thus, ensuring that AI models are provided with the highest quality data is the key to the chatbot operating as error and bias-free as possible.

Direct human feedback is another solution to this, and it’s also where increased voice functionality is a benefit. With all the added contextual information that voice provides, it’ll become a crucial part of AI model training moving forward. Business leaders expect the tools their organizations use to operate to be accurate, otherwise they risk damage to their reputation and customer base. They can’t afford to throw their weight behind an AI that is putting out inaccurate information alongside all the factual insights it’s generating.

Having to frequently look over the chatbot’s output defeats the purpose of the rapid and informative information it’s capable of providing. If this problem isn’t fixed, it’ll cause serious problems for businesses when it comes to trust from its customers.

Also Read: More than 500 AI Models Run Optimized on Intel Core Ultra Processors

Data privacy and security

While hallucinations and accuracy remain issues with AI chatbots, recent data shows that it’s not the only problem that’s keeping business leaders up at night. According to a new study from Alteryx, while over three-quarters of organizations said there was business value in using generative AI, 80% listed data privacy and security concerns as their biggest concerns.

With AI rapidly becoming integrated into our daily lives, it’s important that organizations get data security right or risk losing external confidence in their solutions. This underscores the fundamental shift that’s emerging as technology is ingrained in some capacity in every business’s operations and infrastructure.

Security is often the first question that an organization will ask a prospective AI provider about before integrating the technology into their processes. They’ll be particularly interested to understand how the model retains data, if the model is used to train itself, and about data leak risks. This is where the benefits of utilizing a small language model come in handy, where it can be integrated and deployed into an organization’s on-premise environment, so there’s no outside access to the internet required to utilize the full capabilities of an AI chatbot.

Ultimately, any data sharing practices and storage need to comply with stringent privacy regulations, as well as regular security monitoring and data encryption. Failure to meet strict standards can be detrimental as any data breaches can harm a businesses’ public standing and potentially trigger legal or financial penalties.

Also Read: Cybersec Specialist Gareth Russell Joins Commvault as Field CTO, Security for APAC

AI is the key to better productivity

Generative AI use has boomed, and business leaders are seeing the results, as 77% reported running successful AI co-pilots within their organizations. This shows that many organizations are placing their trust in this technology, and are now being rewarded with deployments that are paying dividends for their business solutions.

But it’s important to remember here that AI models benefit from a human touch and being iterated continuously, otherwise if not updated or left unchecked, it can cause serious issues. When this proactivity is fully acted upon, AI will continue to be a tool that enhances processes and helps us in so many different ways. It’s been remarkable to see its use cases and integrations grow across any sector imaginable.

For example, in healthcare we’ve seen conversational AI chatbots transform the digital experience for patients by acting as an efficient virtual assistant for all of their needs. From quickly analyzing user responses to prompts regarding symptoms and risk factors to improving the scheduling process and helping the patients make the right appointment, this technology can improve satisfaction on both sides.

In an HR setting, a conversational AI chatbot can streamline new candidate onboarding and employee retainment. By optimizing the common tasks that make up these processes, it can help HR professionals better identify the right candidates for open positions. When an HR team isn’t swarmed with applications to analyze, they’re freed up to develop stronger relationships with the most promising candidates and keep current employees satisfied.

These examples show how speed, efficiency, and personalization can all be amplified when AI technology is integrated into business solutions. Simply put, the experience for internal and external users is made better and organizations can directly see how technology like a conversational AI chatbot can help them accomplish their objectives in a more productive fashion.

Conversational AI is such a fascinating technology because of how it’s being used to solve business needs that have existed for decades in new and exciting ways. The value it’s creating for organizations in only the last several years has been great to see, and there’s no signs of slowing down. It’s too important of a technology to disregard, and it’s time we all move forward with it.

But we also can’t sit back and let this technology fully take off without taking into account the risks around accuracy and security. Conversational AI chatbots are here to stay, and they’ll only keep improving as they become more skilled at processing requests quicker and even more personalized, like a true personal assistant for every use case that’s needed.

It’s important that all stakeholders truly get this right as this technology continues to expand exponentially so that they can have the greater functionality and peace of mind that it can bring.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Conversational AI Is Here to Stay, but Don’t Overlook the Risks Before Basking in the Rewards appeared first on AiThority.

]]>
Veza Introduces Access AI to Deliver Generative AI-Powered Identity Security to the Modern Enterprise https://aithority.com/machine-learning/generative-ai/veza-introduces-access-ai-to-deliver-generative-ai-powered-identity-security-to-the-modern-enterprise/ Wed, 07 Aug 2024 05:50:26 +0000 https://aithority.com/?p=574933 Veza Introduces Access AI to Deliver Generative AI-Powered Identity Security to the Modern Enterprise

J.P. Morgan Invests in Veza Veza, the identity security company, announced the launch of Access AI, a generative AI-powered solution to maintain the principle of least privilege at enterprise scale. With Access AI, security and identity teams can now use an AI-powered chat-like interface to understand who can take what action on data, prioritize risky […]

The post Veza Introduces Access AI to Deliver Generative AI-Powered Identity Security to the Modern Enterprise appeared first on AiThority.

]]>
Veza Introduces Access AI to Deliver Generative AI-Powered Identity Security to the Modern Enterprise

J.P. Morgan Invests in Veza

Veza, the identity security company, announced the launch of Access AI, a generative AI-powered solution to maintain the principle of least privilege at enterprise scale. With Access AI, security and identity teams can now use an AI-powered chat-like interface to understand who can take what action on data, prioritize risky or unnecessary access, and remove risky access quickly for both human and machine identities. By bringing the power of generative AI to identity security in the enterprise, Veza makes it possible to prevent, detect, and respond to identity-related issues before they turn into disruptive incidents like breaches or ransomware.

Also Listen: AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies

“The broad adoption of cloud services, digital supply chains and remote access by employees working from anywhere has eroded the value of legacy security controls at the perimeter of the corporate network, positioning identity as the primary control plane for cybersecurity.”

Identity security has become a top priority for companies that have embraced cloud services, SaaS applications, and AI. According to a report from the Identity Defined Security Alliance (IDSA), 90% of organizations experienced an identity-related incident in the past year, and 84% suffered a direct business impact as a result. To combat this growing problem, companies are investing in new business processes like Access Entitlements Management, Identity Security Posture Management (ISPM), and Identity Threat Detection and Response (ITDR).

Similarly, according to Gartner, “The broad adoption of cloud services, digital supply chains and remote access by employees working from anywhere has eroded the value of legacy security controls at the perimeter of the corporate network, positioning identity as the primary control plane for cybersecurity.”1

Access AI

With this announcement, Access AI is available across the Veza Access Platform. It uses machine learning and generative AI to surface and contextualize recommendations for fixing identity-based threats. Teams across identity, security engineering, application security, and compliance use Access AI to investigate who has access, how they got it, and whether it should be revoked. Like all Veza products, Access AI understands both human identities and non-human identities, such as service accounts.

Access AI can:

  • Answer natural-language questions about entitlements and association to identity
  • Understand the access of non-human identities and machine identities
  • Recommend roles that follow the principle of least privilege
  • Surface dormant or excessive permissions to revoke
  • Create ITSM tickets (such as ServiceNow) with instructions for remediation
  • Recommend actions during user access reviews and recertifications

“Two years ago we changed the game in identity access with our Access Graph, and now we are doing it again with Access AI,” said Tarun Thakur, co-founder and CEO, Veza. “Veza is the first company to apply AI to manage and secure entitlements across SaaS systems, cloud data systems, identity systems, and infrastructure services. Customers tell us this is the year of identity. They want access intelligence to hunt for threats automatically across tens of thousands of identities and entitlements within hundreds of systems, which is critical with the recent explosion of non-human identities. To solve this requires speed and intelligence that is only possible with AI.”

“To operate with least privilege, companies must be focused on their identity posture. With the modern enterprise moving away from standing access, success now depends on having the appropriate tools and automated solutions,” said Matthew Sullivan, Infrastructure Security Team Lead at Instacart. “Nearly every discovery made by Veza’s AI has prompted an immediate response from our team. With hundreds of thousands of entitlements to oversee, leveraging AI-driven automation has been essential to staying proactive.”

J.P. Morgan Investment

This launch comes on the heels of an investment from J.P. Morgan, a leading global financial services firm, which brings the company’s total funding to $132 million. This investment will be used to accelerate product innovation as Veza continues to redefine identity security and organizations across the globe begin their identity security transformation.

New Capabilities

As Veza continues to modernize the identity market with its industry-first Access Graph and Access Intelligence, it has also unveiled additions to the Veza Access Platform in conjunction with the release of Access AI.

Enhanced security for non-human identities (NHIs)

  • NHI Insights and NHI Access Security, an inventory of all NHIs like Azure AD service principals and AWS IAM service accounts.
  • Support for new NHI entities: access keys and secrets.
  • Ability to monitor key rotation to reduce the risk of stale credentials.
  • Ability to determine access of keys, tokens, certificates.
  • Custom rules and manual overrides for NHI identification to aid in searching, tracking, and alerting.
  • Support for managing NHI owners to manage timely key rotation, workload uptime, and service account governance.

Also Read: Humanoid Robots And Their Potential Impact On the Future of Work

Lifecycle management for next-gen IGA

  • Role recommendations for access requests based on the principle of least privilege, powered by machine-learning.
  • 10 new targets for Veza Lifecycle Management. Support for provisioning and deprovisioning to Active Directory (AD), Entra ID, Okta, Azure, Salesforce, Microsoft Exchange, Exchange Online, SAP, Google Workspace, and Snowflake. Veza Lifecycle Management goes beyond SCIM protocols to advance the state of provisioning that covers hierarchical groups and roles with a set of automated CRUD aware policies.
  • Support for the Veza Open Authorization API (OAA) which allows quick support for provisioning to new applications, including custom applications.

Activity monitoring for ITDR, Security Engineering, and Security Operations

  • New ability to monitor activity in Okta, collecting and summarizing log data to know who accessed what resources, including last-used date.
  • Calculate the Over-Privileged Access Scores (OPAS) for Okta to prioritize your most over-privileged roles and users.
  • Monitoring for access activity in Snowflake and AWS IAM.

Access intelligence for Cloud PAM, privilege threat hunting, privileged access assurance

  • Out-of-the-box role mining insights and analytics for Snowflake.
  • 20+ out-of-the-box dashboards by persona, risk type (privilege drift, insider threat, cloud entitlements, ISPM, NHI, access creep), and systems (SaaS, data systems, infrastructure).
  • Veza Query Language (VQL) as API endpoints to query, sort, filter, and perform complex compound queries for use cases such as segregation of duties and privilege threat hunting.
  • New Risk Profile based on privilege threat hunting framework that leverages the power of Veza Access Graph, identity risk scores, over-permission access scores, and Veza Query Language.

Don’t miss this out: More than 500 AI Models Run Optimized on Intel Core Ultra Processors

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Veza Introduces Access AI to Deliver Generative AI-Powered Identity Security to the Modern Enterprise appeared first on AiThority.

]]>
To Get the Most Out of AI, You Need to ‘Boss It Around’ https://aithority.com/machine-learning/to-get-the-most-out-of-ai-you-need-to-boss-it-around/ Tue, 06 Aug 2024 07:04:34 +0000 https://aithority.com/?p=574838 To Get the Most Out of AI, You Need to ‘Boss It Around’

In many areas, artificial intelligence is making routine what used to be impossible – and that’s perhaps especially true in business. When users first witness what AI is capable of, they are often in awe – and when they realize they can utilize this technology to accomplish far more than they were capable of previously, […]

The post To Get the Most Out of AI, You Need to ‘Boss It Around’ appeared first on AiThority.

]]>
To Get the Most Out of AI, You Need to ‘Boss It Around’

In many areas, artificial intelligence is making routine what used to be impossible – and that’s perhaps especially true in business. When users first witness what AI is capable of, they are often in awe – and when they realize they can utilize this technology to accomplish far more than they were capable of previously, they sometimes become overly dependent on it. Bad idea; despite its advanced capabilities, AI still has some serious flaws.

In order to be truly effective, AI needs a human “boss.” In the human-AI partnership, it’s the human who needs to come first – who needs to “lead” the AI system into providing results that make sense, by reviewing and applying experience and logic to the results provided by these amazing tools.

For example, AI can sometimes display “tunnel vision,” unaware of “big picture” policy issues and long-term corporate goals. It’s sort of like a star employee who is very good at their job – but lacks knowledge or awareness of a company’s long-term strategic goals. What you want from that employee is their productivity in their specific area – not an overhaul of the company based on their limited knowledge. Users need to treat AI systems as that “talented employee” – keeping in mind that they need to be in control of the overall project strategy. Just like a talented employee needs mentoring and guidance, so too AI. With that human guidance, companies can extract the greatest value from their new “star performers;” without it, the company could find itself in big trouble. And, when introducing new AI tools, which are increasingly employed to do jobs and tasks, they should be treated just like a new employee. This means that they need a boss to mentor and guide them, and supervise and check their work, at least in the beginning. And with AI, we are still at the beginning; and some micromanagement is often needed.

Also Read: Humanoid Robots And Their Potential Impact On the Future of Work

There’s no question that AI tools have been a major boon to productivity – boosting output by nearly 500%, according to studies – and enabling businesses to become more agile, competitive, and efficient. And now that we’ve gotten used to employing them, doing business without these tools is inconceivable. As time goes on, AI tools will improve even more – and with them, the commensurate benefits in efficiency and profitability businesses will be able to extract from them. Indeed, for better or worse, business research today is often just posing a question to an AI tool.

It is true that humans are learning more about the nuances of this process all the time. For example, it is increasingly understood that by asking the right question, and using the right prompt, most AI tools will give you far better results than you could have gotten manually in such a short period of time. Automated AI systems will parse data in a matter of minutes, if not seconds – far faster than any human could hope to – and present it in a logical manner that is easily understood and comprehended. The temptation to take those results and run with them is, understandably, very strong.

But those automated results need to be understood, reviewed, and checked for accuracy. As smart as it is, AI sometimes comes up short. AI tools sometimes produce inaccurate, incorrect, or even illogical results – and without a human supervising the data generation process, companies could find themselves facing fines, lawsuits, and damaged reputations. Air Canada furnishes a good example of what can happen when AI runs unchecked: The company was recently ordered to pay damages to a passenger who paid full fare for tickets to a grandparent’s funeral, based on incorrect information furnished by an AI-powered chatbot. The company’s defense was that it could not be held responsible for incorrect information furnished by the chatbot – an argument rejected by the court, which ordered Air Canada to refund the overcharge. Had a human reviewed the information offered by the chatbot, the airline could have avoided the expense – and embarrassment – that ensued.

But it’s not just about company coffers: Overreliance on automatically generated AI data can damage or even derail a career. In order to make an effective presentation – whether in person, in a presentation, or in a Zoom meeting – the presenter needs to be intimately familiar with the information they are presenting. This is difficult if the presenter is simply using information produced by AI. For example,  if the automated data is incorrect, they are likely to be called out on it – with the audience or stakeholder demanding to know the source of data, the reasoning behind a statement, or the logic of an argument. And the presenter will likely not be able to answer in an effective and competent manner. A similar situation could arise even if the data is correct—those listening to the presentation could very well start asking follow-up questions, or want to know the source or reasoning behind it.

Also Read: The Ethical Dilemmas of Generative AI

In order to effectively – and safely – utilize their automated results, AI users need to engage in some “active learning,” where they evaluate the results and apply knowledge, facts, and experience to the review process. If the user follows that path, they could ask themselves the same questions likely to be posed to them – giving them time to find the answers they need. But ignoring that review could put them in jeopardy – making them look like fools when presenting information that on the surface appears to be correct, but might be riddled with flaws and or other factors that lead to questions.

It’s a fact that more than half of Americans are concerned about AI’s effects on their lives. Among other things, some fear losing their jobs to AI, some fear AI systems will compromise their privacy, some fear politicization of results. And it’s understandable why people fear AI: It’s been presented in the media as a monolithic, independent “monster” that is going to change life fundamentally, turning us all into its servants, if not destroy us. But that’s not the case: AI is just the latest in advanced tools that we can use to make business, and life, easier and better. We don’t work for AI – it works for us. AI users should keep this in mind when using advanced tools to do their business research. It’s the human user who is in charge, who needs to lead – and the best way to do that is to utilize their experience and knowledge to ensure that the results AI tools provide are accurate, correct, and logical.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post To Get the Most Out of AI, You Need to ‘Boss It Around’ appeared first on AiThority.

]]>
AiThority Interview with Questionnaire for Jean-Philippe Desbiolles – IBM Managing Director – Groupe Crédit Mutuel https://aithority.com/machine-learning/aithority-interview-with-questionnaire-for-jean-philippe-desbiolles-ibm-managing-director-groupe-credit-mutuel/ Thu, 01 Aug 2024 12:40:31 +0000 https://aithority.com/?p=574247 AiThority Interview with Questionnaire for Jean-Philippe Desbiolles - IBM Managing Director – Groupe Crédit Mutuel

 Jean-Philippe Desbiolles – IBM Managing Director – Groupe Crédit Mutuel, talks about the use of cutting-edge technologies, importance of ethical AI adoption and the significant impact of machine learning and automation on cooperative banking. _______ Hello Jean-Philippe! Welcome to our AiThority Interview Series. Please share your journey and learnings at Crédit Mutuel Group as IBM […]

The post AiThority Interview with Questionnaire for Jean-Philippe Desbiolles – IBM Managing Director – Groupe Crédit Mutuel appeared first on AiThority.

]]>
AiThority Interview with Questionnaire for Jean-Philippe Desbiolles - IBM Managing Director – Groupe Crédit Mutuel

 Jean-Philippe Desbiolles – IBM Managing Director – Groupe Crédit Mutuel, talks about the use of cutting-edge technologies, importance of ethical AI adoption and the significant impact of machine learning and automation on cooperative banking.

_______


Hello Jean-Philippe! Welcome to our AiThority Interview Series. Please share your journey and learnings at Crédit Mutuel Group as IBM Managing Director.

My journey at Crédit Mutuel Group is marked by one word: a trusted partnership between Crédit Mutuel Euro-Information (Crédit Mutuel Alliance Fédérale’s technology subsidiary) & IBM. We are here to assist the Group in the implementation and acceleration of their transformation leveraging technology to serve humans in their daily jobs.

Innovation is at the core of Crédit Mutuel Alliance Fédérale and Euro-Information’s technological journey. Since 2016, Crédit Mutuel Alliance Fédérale invests in cutting-edge technologies ranging from AI to Quantum Computing.

Also Read: AiThority Interview with Carolyn Duby, Field CTO and Cyber Security GTM Lead at Cloudera

One of the key milestones was leveraging IBM Watson[i] first to transform customer relations and enhance operational efficiency. Throughout this journey, we’ve learned that AI technology can significantly improve both customer and advisor experiences, leading to more personalized and efficient services.

We have recently announced the expansion of our long-term collaboration via the IBM watsonx platform — an AI and data platform designed to help businesses develop responsible AI — deployed on Credit Mutuel’s in-house computing infrastructure. This collaboration will make it possible to accelerate and industrialize the deployment of generative AI.

Additionally, our collaboration highlighted that embracing innovative technologies is essential for staying competitive and meeting the evolving needs of customers.

What are the key ethical considerations and challenges you foresee as AI adoption accelerates in the finance industry?

 One word is key: TRUST. It is all about trust as without trust, there is no adoption and without adoption, no ROI.

When considering the acceleration of AI adoption in the finance industry, there are several key ethical considerations and challenges that must be addressed.

I like to take the image of “Russian dolls” to illustrate my thoughts where the biggest doll is about ethical and societal concerns, the next one is about regulation, the other one is about corporation values & conduct guidelines, the other one is about operating model and finally the technology platform which enables all of this.

It is important to have a code of ethics to be sure that everyone in the enterprise is aligned on the use and consequences of AI. Establishing ethical guidelines for AI use in finance to ensure fairness, transparency, and accountability is essential.

 I am convinced that any corporation has to share explicitly with their employees the rules of the game, meaning what is or is not tolerated, accepted, or promoted, … if this is not done, humans will do what they think is appropriate and this could lead to serious breaches with corporate and societal values. So, think collectively about these rules, ensure they are shared, known and adopted across the whole company. This is precisely what Crédit Mutuel has done by adopting an AI Code of Ethics along with tools and processes to implement it.

Of course, data privacy and security are a major challenge: protecting the data that fuels AI models is crucial as financial data is highly sensitive, and breaches can lead to severe consequences.

One other key ethical & societal consideration and challenge include ensuring the transparency and explainability of AI systems to maintain trust and accountability. It’s crucial to address bias and fairness in AI algorithms to prevent discriminatory practices and ensure equal treatment of all customers.

Crédit Mutuel is renowned as one of France’s top cooperative banks. How is machine learning and automation specifically tailored to meet the needs of cooperative banking, and what benefits has Crédit Mutuel observed from it.

Operating as a sovereign technology bank, Crédit Mutuel Alliance Fédérale stands out for its ability to carry out almost all of this IT processing in its own datacenters — an approach underpinned by the historic collaboration established between the teams of Euro-Information, under the leadership of Frantz Rublé, CEO of Euro-Information, and IBM.

In 2016, Crédit Mutuel Alliance Fédérale embarked on a strategic partnership with IBM to harness the power of artificial intelligence to support its employees. This collaborative effort led to the development and implementation of innovative AI tools. A year and a half later, 25,000 advisors at Crédit Mutuel Alliance Fédérale (Crédit Mutuel branches and CIC agencies), were using the tool on a daily basis to reduce the time spent on administrative tasks, such as data entry, signatures, and search. As a result, in 2022, the equivalent work hours of nearly 1,600 full-time employees were freed up for the benefit of customers and members who want a closer relationship with their local adviser.

In 2023, AI freed up nearly 1 million hours of administrative work to enable its 25,000 advisors to continue to best serve their members and clients showcasing Crédit Mutuel’s commitment to leveraging advanced technology for improved client relationship.

For the past eight years, the success of Crédit Mutuel Alliance Fédérale collaboration with IBM in artificial intelligence technologies has demonstrated the relevance of their strategy combining mutualist commitment and innovation. With watsonx, the Euro-Information and IBM teams gathered within a Cognitive Factory led by Laurent Prud’hon, Head of Cognitive Factory, are working on the industrialization of 35 new use cases to enable the banking advisors to always offer the best possible services to their customers and members.

Also Read: Role of AI in Cybersecurity: Protecting Digital Assets From Cybercrime

How do quantum computing, and cybersecurity technologies contribute to shaping the future of financial services at Crédit Mutuel Group?

 Back in 2016, Crédit Mutuel was among the first financial institutions to apply artificial intelligence and its industrialization. Their ambition for quantum computing is similar: to explore, then industrialize, in order to further transform the banking and insurance businesses, all with the underlying goal of also keeping their customers’ information secure. Because banking and insurance are technological industries, it is essential to constantly innovate to master the technologies of the future, and to ensure that they help guarantee sovereignty.

After a successful initial phase, we have identified specific use cases, among many areas of interest in financial services, for the next “scaling” phase, including research into customer experience, fraud management and risk management. This phase also intends to explore possibilities for how quantum computing could lead to future improvements in Crédit Mutuel Alliance Fédérale’s customer and employee experience.

Looking ahead, what are the top challenges of AI and automation adoption in the finance sector?

The 3 key challenges are:

1/ Think business processes reengineering. AI is highly transformational, its power forces us to re-think and re-design critical business processes. Very few clients are really at this stage, we continue to (just) add AI to business processes. This has to change, AI maturity is there, let’s be bold and dare!

2/ Be trusted! As said, trust is key, it is not negotiable. It means that our clients have to implement in parallel the appropriate operating and technology model to do it. It’s about the AI platform, the right governance, the trusted models and the most effective techniques to improve model performance. For IBM, it’s watsonx, Granite models and Instructlab.

3/ AI at scale. POCs and MVPs lead to a situation where many AI initiatives are undertaken but too often we see small projects in many places. Today, our clients want to infuse AI at scale within their organisation. To do so, we have to deploy AI & Data Factories, with the right skill sets, tooling and methods. From the very beginning, Euro-Information has been a pioneer in adopting this AI at scale and industrialization strategy, and the results and benefits speak for themselves. We know how to do it, let’s make it happen!

Could you recommend a thought leader in the AI industry whose perspectives on AI’s future you find particularly insightful and would like to share with our audience?

 I have a lot of humility… so, I would allow myself to share with you the 2 books I published in the last 3 years: in 2021, “AI will be what you make of it” and in 2023, “Human or AI, who will decide the future?”.

To make a long story short, the first one was based on a simple belief: we need to embrace AI to master it. Pointless to push back, it’s a structural change, a real industrial revolution.

I structured my book around 10 golden rules based on the projects I led in ASIA, USA and Europe.

The second book is about another conviction and statement namely the collaboration between AI & Humans. In some cases, humans have to act alone, in others machines have to decide alone and, in many cases, the collaboration between human and machine lead to the best decision. The question is: which scenario for which use cases? The book brings some tips, methods, rationale to help taking such decisions.

Last thing I want to share, if you buy these books, you will not enrich me, as 100% of the author rights are going to children hospital foundation.  So, enjoy… and give me your feed backs…

Also Read: AI and Social Media: What Should Social Media Users Understand About Algorithms?

Thank you, Jean-Philippe, for sharing your insights with us.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post AiThority Interview with Questionnaire for Jean-Philippe Desbiolles – IBM Managing Director – Groupe Crédit Mutuel appeared first on AiThority.

]]>
Protect AI Acquires SydeLabs to Red Team Large Language Models https://aithority.com/machine-learning/generative-ai/protect-ai-acquires-sydelabs-to-red-team-large-language-models/ Thu, 01 Aug 2024 06:28:51 +0000 https://aithority.com/?p=574662 Protect AI Acquires SydeLabs to Red Team Large Language Models

SydeLabs’ SydeBox extends Protect AI’s AI-Security Posture Management platform with advanced cyber attack testing for LLMs Protect AI, a leader in AI security, announced the acquisition of SydeLabs, which specializes in the automated attack simulation (red teaming) of generative AI (GenAI) systems. This strategic acquisition enhances the Protect AI platform’s ability to test and improve […]

The post Protect AI Acquires SydeLabs to Red Team Large Language Models appeared first on AiThority.

]]>
Protect AI Acquires SydeLabs to Red Team Large Language Models

SydeLabs’ SydeBox extends Protect AI’s AI-Security Posture Management platform with advanced cyber attack testing for LLMs

Protect AI, a leader in AI security, announced the acquisition of SydeLabs, which specializes in the automated attack simulation (red teaming) of generative AI (GenAI) systems. This strategic acquisition enhances the Protect AI platform’s ability to test and improve LLM security and extends the company’s lead as the only provider of end-to-end AI security solutions.

“We couldn’t be more excited about joining the Protect AI mission and the prospect of what we can achieve in terms of helping companies of all sizes adopt and deploy more secure LLMs and AI applications.”

SydeLabs: A Leader in AI Red Teaming

Generative AI and LLM adoption are revolutionizing industries. LLMs are being integrated into critical end user applications such as customer service, finance and healthcare. However the complexity and scale of the technology has exacerbated security concerns that traditional application security processes simply can not keep up with or address effectively.

SydeLabs was founded less than a year ago by former product and engineering leads from Google and MPL, and has quickly established itself as a pioneer in the field of AI security. Based in Bangalore, India, SydeLabs has developed SydeBox, a cutting-edge product designed to provide comprehensive vulnerability assessments for GenAI systems. The talented team from SydeLabs will join Protect AI where they will continue to add local talent in Bangalore to complement our Seattle and Berlin based teams.

“Protect AI is continuously looking to add products to our AI security posture management platform that help our customers build a safer AI-powered world,” said Ian Swanson, CEO of Protect AI. “The acquisition of SydeLabs extends the Protect AI platform with unmatched red teaming capabilities and immediately provides our customers with the ability to stress test, benchmark and harden their large language models against security risks.”

Also Read: AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies

SydeBox will be integrated into the Protect AI Platform and rebranded as Protect AI Recon. Recon identifies potential vulnerabilities in LLMs, ensuring enterprises can deploy AI applications with confidence. Key features of Recon include no-code integration, model-agnostic scanning, and detailed threat profiling across multiple categories. Recon uses both an attack library and LLM agent based solution for red teaming and evaluating the security and safety of GenAI systems. Protect AI Recon aligns perfectly with the growing demand for robust AI security solutions, driven by formal guidance from NIST, MITRE, OWASP and CISA, as well as mandates like the Executive Order on AI Safety and Security and the EU AI Act.

“The combination of SydeLabs’ SydeBox and Protect AI’s platform provides customers a comprehensive defense-in-depth solution for building, managing, testing, deploying and monitoring LLMs,” said Ruchir Patwa, co-founder of SydeLabs. “We couldn’t be more excited about joining the Protect AI mission and the prospect of what we can achieve in terms of helping companies of all sizes adopt and deploy more secure LLMs and AI applications.”

Also Read: Extreme Networks and Intel Join Forces to Drive AI-Centric Product Innovation

The new Recon product will enable Protect AI to meet growing customer demand for robust AI security solutions. Customers will benefit from detailed threat profiling across jailbreaks, prompt injection attacks, input manipulations and other attack vectors, which are crucial for maintaining the integrity and security of AI systems. Recon covers six of the OWASP Top 10 for LLM applications.

“Recon, formally SydeBox, has enabled us to identify and fix security blindspots before deploying our GenAI solutions to ensure we are building the most secure and safe LLM powered applications, and that products we serve our customers are free from any security or safety loopholes,” said Kiran Darisi, CTO and cofounder, AtomicWork.

This acquisition and new product, Recon, further enhances Protect AI’s position as the leader in the AI security market and AI Security Posture Management (AI-SPM) solutions, differentiating it from competitors and solidifying its market presence. More specifically when used alongside Layer, Protect AI’s LLM observability and monitoring solution, Recon enables organizations to harden the implementation of LLMs against the spectrum of emerging security concerns associated with GenAI usage. Partners and stakeholders will also gain from the enhanced security capabilities, ensuring that the entire AI ecosystem is better protected against potential threats.

Also Read: More than 500 AI Models Run Optimized on Intel Core Ultra Processors

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Protect AI Acquires SydeLabs to Red Team Large Language Models appeared first on AiThority.

]]>
HatchWorks AI Unveils GenIQ: Revolutionizing Software Development with AI-Driven Process Intelligence https://aithority.com/technology/analytics/business-intelligence/hatchworks-ai-unveils-geniq-revolutionizing-software-development-with-ai-driven-process-intelligence/ Thu, 01 Aug 2024 06:01:22 +0000 https://aithority.com/?p=574658 HatchWorks AI Unveils GenIQ: Revolutionizing Software Development with AI-Driven Process Intelligence

HatchWorks AI announces the launch of GenIQ, an AI-driven process intelligence platform transforming software development. Utilizing Bloomfilter and HatchWorks’ Generative-Driven Development, GenIQ identifies inefficiencies throughout the software development lifecycle (SDLC) and pinpoints where best to apply AI to maximize its effectiveness. Also Read: AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC […]

The post HatchWorks AI Unveils GenIQ: Revolutionizing Software Development with AI-Driven Process Intelligence appeared first on AiThority.

]]>
HatchWorks AI Unveils GenIQ: Revolutionizing Software Development with AI-Driven Process Intelligence

HatchWorks AI logo

HatchWorks AI announces the launch of GenIQ, an AI-driven process intelligence platform transforming software development. Utilizing Bloomfilter and HatchWorks’ Generative-Driven Development, GenIQ identifies inefficiencies throughout the software development lifecycle (SDLC) and pinpoints where best to apply AI to maximize its effectiveness.

Also Read: AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies

“GenIQ is a transformational approach to software development,” said Brandon Powell, CEO at HatchWorks AI. “Rooted in our pioneering Generative-Driven Development™ methodology, GenIQ empowers technology leaders to identify inefficiencies and leverage generative AI as a competitive advantage, ensuring projects are completed on time and within budget, setting new benchmarks for innovation.”

Also Read: Extreme Networks and Intel Join Forces to Drive AI-Centric Product Innovation

Generative AI promises to enhance development productivity, however, measuring ROI is difficult. 56% of enterprise leaders believe ROI is positive, but aren’t precisely measuring. GenIQ not only helps you identify and address gaps in your SDLC with AI-driven process intelligence but also measure the ROI of AI.

GenIQ offers unmatched transparency and predictability, integrating with systems like Jira, GitHub, Figma, Asana, and Azure, enabling leaders to:

  • Observe, measure, and improve process productivity.
  • Identify optimal areas for Gen AI application.
  • Measure the ROI of Gen AI initiatives.
  • Make informed, ROI-driven decisions with advanced predictability on project timelines, costs, and outputs.

“We’re thrilled to join forces with HatchWorks AI to roll out GenIQ,” said Erik Severinghaus, Co-Founder & Co-CEO of Bloomfilter. “Having witnessed firsthand the transformative power of process mining the SDLC, we know it can significantly boost team success. Leveraging AI to enhance productivity and make software development more observable, predictable, and efficient not only saves time and reduces waste but also ensures successful software delivery.”

Also Read: More than 500 AI Models Run Optimized on Intel Core Ultra Processors

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post HatchWorks AI Unveils GenIQ: Revolutionizing Software Development with AI-Driven Process Intelligence appeared first on AiThority.

]]>
Generative AI in Healthcare: Key Drivers and Barriers to Innovation https://aithority.com/machine-learning/generative-ai/generative-ai-in-healthcare-key-drivers-and-barriers-to-innovation/ Wed, 31 Jul 2024 10:24:00 +0000 https://aithority.com/?p=574296 Drivers and barriers of Generative AI in Healthcare

The integration of artificial intelligence (AI) and generative AI (GenAI) into the healthcare industry introduces countless possibilities for improving patient care and outcomes. GenAI has the potential to revolutionize how healthcare professionals gather and analyze data for diagnosis and treatment. According to a December 2023 Gartner Healthcare Provider Research Panel survey, 84% of healthcare provider […]

The post Generative AI in Healthcare: Key Drivers and Barriers to Innovation appeared first on AiThority.

]]>
Drivers and barriers of Generative AI in Healthcare
The integration of artificial intelligence (AI) and generative AI (GenAI) into the healthcare industry introduces countless possibilities for improving patient care and outcomes. GenAI has the potential to revolutionize how healthcare professionals gather and analyze data for diagnosis and treatment.
According to a December 2023 Gartner Healthcare Provider Research Panel survey, 84% of healthcare provider executives believe large language models (LLMs) — the foundation of GenAI — will have a significant (35%), transformative (37%), or disruptive (12%) impact on the healthcare industry overall.

The year 2024 marks a pivotal moment in the healthcare landscape, characterized by the rapid integration and evolution of generative artificial intelligence (AI). This technological revolution has unleashed a wave of innovations, transforming the way healthcare is delivered, managed, and experienced worldwide.

Also Read: Understanding Shadow AI: Key steps to Protect your Business

The Key GenAI drivers in Healthcare

#1 Data Generation and Augmentation:

Synthetic data generation and augmentation are crucial drivers of generative AI (GenAI) in the healthcare industry. By producing synthetic data, healthcare professionals can overcome limitations associated with real-world data (RWD). Synthetic data is essential for training machine learning models, enhancing their accuracy and diversity by upsampling rare events or patterns. This technique allows for the expansion of datasets without additional real data collection, optimizing information extraction and improving diagnostic accuracy. Moreover, synthetic data addresses privacy concerns by reproducing population characteristics without direct links to individuals, significantly reducing the risk of identity disclosure. This enhances patient trust and facilitates data sharing, which is often hindered by regulatory and ethical concerns. Synthetic data mimics real datasets while preserving critical information such as feature correlations and parameter distributions, making it valuable for statistical modeling, hypothesis-generating studies, and educational purposes. Additionally, it helps mitigate bias in machine learning algorithms by incorporating data from underrepresented populations, leading to more equitable and effective healthcare solutions. Projects like Simulacrum demonstrate the practical applications of synthetic data, providing synthetic cancer data that supports research without compromising patient privacy.

#2 Drug Discovery and Development

Generative AI (GenAI) is poised to revolutionize drug discovery and development in the healthcare industry. One of the most groundbreaking impacts of GenAI in 2024 is its role in advancing personalized medicine. By analyzing genetic makeup, lifestyle factors, and medical histories, AI algorithms can generate personalized treatment plans tailored to an individual’s unique biological characteristics. This approach ensures more effective and targeted therapies while minimizing adverse effects.

Furthermore, GenAI has significantly transformed the drug development process. AI-powered algorithms can predict potential drug interactions, analyze molecular structures, and simulate drug behavior, thereby accelerating the discovery and development of new medications. This technological advancement has led to the rapid introduction of groundbreaking drugs designed to target specific genetic profiles and disease characteristics.

GenAI’s contributions extend beyond drug discovery and development. It enhances patient outcomes by predicting disease progression and treatment responses more accurately through the analysis of electronic health records (EHRs) and other patient data. This allows healthcare providers to make more informed decisions regarding treatment options and resource allocation.

#3 Personalized Medicine

Generative AI (GenAI), a sophisticated type of artificial intelligence, has the potential to revolutionize the healthcare industry. GenAI can create new content, such as text, code, and images, and although it is still under development, its applications in personalized medicine are particularly promising. Personalized medicine is an approach to healthcare that considers each individual’s unique genetic makeup, environment, and lifestyle, thereby improving diagnostic accuracy and treatment efficacy and reducing the risk of side effects.

Applications of GenAI in Personalized Medicine:

1. Drug Discovery

2. Drug Development

3. Diagnosis

4. Treatment

5. Prevention

#4 Medical Imaging and Diagnostics

Generative AI (GenAI) is revolutionizing medical imaging and diagnostics, significantly enhancing the accuracy and efficiency of healthcare delivery. By synthesizing realistic medical images, GenAI addresses the scarcity of annotated data, improving the generalizability of imaging models and facilitating the development of advanced imaging algorithms. In image denoising and enhancement, GenAI reduces noise and enhances visual clarity, aiding radiologists and clinicians in accurate assessments. GenAI also excels in image reconstruction and super-resolution, providing complete views for analysis and enabling visualization of fine details.

Moreover, GenAI automates image segmentation, accurately delineating organs, tumors, or abnormalities, which aids in treatment planning, surgical interventions, and disease monitoring. These innovations in medical imaging and diagnostics demonstrate GenAI’s transformative impact on healthcare.

#5 Content Creation

GenAI’s capabilities in content generation and hyperpersonalization are key drivers in the pharma industry. It can create personalized content tailored to individual healthcare providers or patients’ micropreferences, leading to up to 40% better engagement rates on digital channels like emails, web, and banner ads. This approach involves defining the taxonomy of tagging to learn from history, developing an operating model to assemble and pre-approve content variants, and piloting content hyper-personalization (CHP) to uncover opportunities.

Key Benefits:

  • Content Tagging: Achieves 50% faster-automated tagging, enhancing efficiency and accuracy.
  • Content Hyperpersonalization: Generates personalized content variants, increasing engagement by up to 25%.
  • MLR Acceleration: Speeds up medical-legal-regulatory approvals with improved similarity estimates, enhancing the approval process by 33%.

#6 Automation and Efficiency in Clinical Workflows

Generative AI (GenAI) is revolutionizing clinical workflows by enhancing automation and efficiency across several critical areas. In patient intake and data management, automation simplifies registration, scheduling, and data processing, reducing manual errors and speeding up the intake process. Tools like Thoughtful’s Patient Intake and Prior Authorization Module ensure accurate and accessible patient data, leading to improved treatment precision and patient satisfaction.  GenAI also transforms treatment planning and management by analyzing extensive data to suggest personalized treatment plans, optimizing treatment efficacy and resource use. In revenue cycle management, automation streamlines b******, c***** processing, and payment collections improving financial operations and ensuring steady cash flow for healthcare providers. Additionally, post-care coordination benefits from automation through scheduling and patient monitoring tools, which facilitate timely follow-up care and ensure adherence to treatment plans, ultimately improving health outcomes.

The benefits of automation in clinical workflows are substantial. It increases efficiency and saves time by automating administrative and clinical tasks, allowing more focus on direct patient care and speeding up diagnostic and treatment processes. Automation enhances accuracy and reduces errors, ensuring safer and more reliable patient care. It improves patient satisfaction by accelerating service delivery and providing a more efficient overall experience through automated reminders and timely procedures. Automation also enables the scalability of healthcare services, adapting efficiently to increased patient loads while maintaining service quality. Furthermore, it reduces costs by lowering labor expenses, minimizing errors, managing inventory effectively, and optimizing resource allocation.

Generative AI Barriers in the Healthcare Industry

Generative AI (Gen AI) adoption in the healthcare industry, while progressing, faces several significant barriers despite its strong readiness across technology, data, people, and processes. Research by Everest Group indicates that healthcare is well-prepared for Gen AI, lagging only behind banking and financial services in terms of readiness. However, several inherent challenges impede industry-wide adoption.

1. Data Privacy Concerns

A critical barrier to Gen AI adoption in healthcare is data privacy. The sector handles vast amounts of sensitive patient information that necessitate stringent protection measures. Ensuring robust data privacy is essential to maintaining trust and compliance, given the sensitive nature of health data.

2. Accuracy and Human Oversight

Processes involving clinical decision-making require high levels of accuracy and human oversight. The stakes are exceptionally high in healthcare, where the precision of AI-driven insights can directly impact patient outcomes. Ensuring the reliability of Gen AI models while integrating human oversight remains a significant challenge.

3. Regulatory Complexity

Regulatory compliance presents a notable hurdle for Gen AI adoption. Healthcare providers must navigate a complex landscape of compliance requirements, with 70 percent of organizations identifying regulatory issues as a potential barrier. Adhering to these regulations while implementing Gen AI solutions is crucial for successful adoption.

4. Talent Readiness

The effective deployment of Gen AI solutions in healthcare requires a broad range of specialized skills. Talent readiness is a concern, with only 35 percent of healthcare organizations reporting sufficient AI engineers and less than half having adequate data scientists and software developers. The shortage of skilled professionals impacts model training, testing, and validation efforts.

5. Innovation and Model Adaptation

Many organizations are innovating to address challenges related to infrastructure, computing power, and scalability required by Large Language Models (LLMs). Leading entities are now focusing on smaller language models or proprietary custom models tailored to specific healthcare needs. These specialized models aim to mitigate concerns related to accuracy and bias, offering a promising solution to some of the barriers faced.

Also Read: AiThority Interview with Brian Stafford, President and Chief Executive Officer at Diligent

Transformative Impact of Generative AI in Healthcare 

Advancing Clinical Decision-Making

Generative AI enables the swift analysis of complex medical data, facilitating precise diagnoses and personalized treatment plans. This optimization of resources enhances the accuracy and efficiency of clinical decisions.

Elevating Patient Engagement

Personalized health information, powered by AI, empowers patients to take an active role in their healthcare. This increased engagement improves adherence to treatment plans and fosters better collaboration between patients and healthcare providers.

Expanding Access to Healthcare

AI-driven telemedicine and remote monitoring technologies bridge gaps in healthcare delivery, ensuring high-quality care regardless of geographical location. This expansion of access democratizes healthcare, making it more inclusive and equitable.

Streamlining Data Management

Generative AI improves the management of vast amounts of health data, ensuring it is accessible, secure, and easily shareable. This efficiency in data handling supports better coordination and continuity of care across the healthcare ecosystem.

Top Generative AI in Healthcare Startups

Huma.AI

Medical IP

Abridge

Hippocratic AI

Pingoo

What does GenAI’s Future look like in the Healthcare Industry?

The future of Generative AI (GenAI) in healthcare is poised to revolutionize medical care delivery, research, and personalization, driven by rapid technological advancements and shifting market dynamics. Several key areas are expected to shape the integration and impact of GenAI across the healthcare sector.

According to a BCG article, GenAI holds the potential to customize medical devices, such as prosthetics and implants, to individual patients. These tailored devices will not only offer improved fit but also incorporate self-maintenance and repair capabilities. Additionally, GenAI can analyze and predict changes in brain health over time, enabling physicians to identify and address cognitive issues or neurodegenerative disorders at earlier stages.

Future applications of GenAI may further enhance data collection and analysis through remote monitoring systems, leading to more effective patient interventions. Furthermore, GenAI could advance quality control measures by predicting when medical devices and equipment require maintenance, allowing caregivers to schedule repairs proactively and minimize downtime.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Generative AI in Healthcare: Key Drivers and Barriers to Innovation appeared first on AiThority.

]]>
Zenity Announces Strategic Investment Led by M12 https://aithority.com/botsintelligent-assistants/zenity-announces-strategic-investment-led-by-m12/ Wed, 31 Jul 2024 07:13:18 +0000 https://aithority.com/?p=574568 Zenity Announces Strategic Investment Led by M12

Zenity, a leader in securing enterprise copilots and low-code development, is excited to announce a strategic investment led by M12, Microsoft’s Venture Fund. Ben Kliger, Zenity CEO and co-founder, stated, “We are excited to be partnering with M12 to continue our mission of helping enterprises securely unleash the use of AI copilots and low-code development. Partnering […]

The post Zenity Announces Strategic Investment Led by M12 appeared first on AiThority.

]]>
Zenity Announces Strategic Investment Led by M12

Zenity logo (PRNewsfoto/Zenity)

Zenity, a leader in securing enterprise copilots and low-code development, is excited to announce a strategic investment led by M12, Microsoft’s Venture Fund.

Ben Kliger, Zenity CEO and co-founder, stated, “We are excited to be partnering with M12 to continue our mission of helping enterprises securely unleash the use of AI copilots and low-code development. Partnering with Microsoft is a strategic move for Zenity as Microsoft’s global reach, robust technology stack, and commitment to innovation align perfectly with our vision. This investment allows us to leverage Microsoft’s resources to accelerate our growth and work closely on a joint go-to-market strategy, enhancing the security and success of our mutual customers.”

Also Read: Appdome Unveils GenAI-Powered Mobile Threat Resolution

As enterprises rush to adopt enterprise copilots and low-code development platforms, such as Microsoft’s own offerings of Microsoft Copilot, Power Platform, Copilot Studio, and Fabric, it is the first time that business users are at the forefront of, and in control of, business application development. In fact, Gartner estimates that by 2026, more than 80% of enterprises will have deployed GenAI-enabled applications in production environments, and also that by 2025, 70% of new applications developed by enterprises will use low-code/no-code technology.

Also Read: SingleStore Launches dot_product Accelerator AI Program

This tectonic market shift in enterprise means that most application development is happening outside of IT. This means applications are being developed without traditional safeguards; namely IT involvement, software development lifecycle (SDLC), CI/CD security tooling, and traditional AppSec tools that rely on code scanning to spot vulnerabilities. This results in a tidal wave of Shadow Application Development.

Zenity researchers have found upwards of 79,000 applications being developed per organization using copilots and low-code platforms. This research further found that over 60% of these applications contain serious security vulnerabilities; be it hard-coded secrets, untrusted guests gaining privileged access to critical assets, over-exposing bots to the public internet, or apps and copilots with poorly configured authentication mechanisms.

Zenity was founded in 2021 to bring application security controls to business-led development happening across these enterprise copilots and low-code development platforms. Zenity currently helps many Fortune 500 enterprises to understand what business users are building across their copilot and low-code estate, identifying security vulnerabilities and development and runtime, and helping to automatically remediate critical findings at scale.

Also Read: AI and Big Data Governance: Challenges and Top Benefits

“M12 is committed to driving innovation, particularly through AI and low-code development, which are key to enhancing innovation, productivity, and efficiency. Our investment in Zenity underscores the urgent need for security solutions as enterprise users increasingly utilize developer-level capabilities to process data at AI speeds, without existing safeguards for app security,” said Jason McBride, Investment Partner at M12. “Witnessing Zenity’s impact within Microsoft firsthand showcased the substantial benefits it provides to its customers, solidifying our decision to deepen our partnership with Zenity. This collaboration is pivotal as enterprises strive to securely leverage enterprise copilots and low-code/no-code platforms.”

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Zenity Announces Strategic Investment Led by M12 appeared first on AiThority.

]]>
Endava Partners with OpenAI to Deploy ChatGPT Enterprise Throughout Organisation https://aithority.com/machine-learning/endava-partners-with-openai-to-deploy-chatgpt-enterprise-throughout-organisation/ Tue, 30 Jul 2024 17:05:04 +0000 https://aithority.com/?p=574535 Endava Partners with OpenAI to Deploy ChatGPT Enterprise Throughout Organisation

CTO says the AI technology will have a “transformative impact” on the value delivered to customers Endava, a global provider of digital transformation, agile development and intelligent automation services, announces a strategic deal with OpenAI to deploy ChatGPT Enterprise licenses to all of its 11,000+ global employees. Also Read: AI and Big Data Governance: Challenges and […]

The post Endava Partners with OpenAI to Deploy ChatGPT Enterprise Throughout Organisation appeared first on AiThority.

]]>
Endava Partners with OpenAI to Deploy ChatGPT Enterprise Throughout Organisation

CTO says the AI technology will have a “transformative impact” on the value delivered to customers

Endava, a global provider of digital transformation, agile development and intelligent automation services, announces a strategic deal with OpenAI to deploy ChatGPT Enterprise licenses to all of its 11,000+ global employees.

Also Read: AI and Big Data Governance: Challenges and Top Benefits

“AI is set to play a crucial role in revolutionising product innovation, streamlining business operations and shaping technologies to meet and exceed customer expectations”

The collaboration marks a significant step forward in Endava’s commitment to leveraging cutting-edge AI technology to drive results for its customers. The organization’s focus will be on using ChatGPT’s advanced language models and capabilities to enhance operations, drive innovation and help achieve accelerated impact for clients.

Also Read: Building a Content Supply Chain in the Era of Generative AI

“AI is set to play a crucial role in revolutionising product innovation, streamlining business operations and shaping technologies to meet and exceed customer expectations,” said Matt Cloke, Endava’s CTO. “OpenAI is at the forefront of generative AI technology and through ChatGPT Enterprise, Endava gains access to enterprise-grade security and privacy with the most powerful version of ChatGPT yet. As we embark on this journey, I look forward to seeing the transformative impact it will have on our work and the value we deliver to our clients.”

A team of “ChatGPT Champions” from across business functions have already piloted the technology over a number of months, integrating it with internal systems to establish best practices and pave the way for a seamless, company-wide integration. To ensure that employees maximise the benefits of this powerful tool whilst using it responsibly, Endava will launch a mandatory training module on the use of AI.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Endava Partners with OpenAI to Deploy ChatGPT Enterprise Throughout Organisation appeared first on AiThority.

]]>