Accenture has announced a significant development in generative AI with the introduction of its AI Refinery framework, built on the NVIDIA AI Foundry platform. The new framework is designed to help clients create custom large language models (LLMs) using the newly launched Llama 3.1 collection of openly available models, marking a major step in tailoring generative AI to specific business needs.
As enterprises continue to explore the transformative potential of generative AI, there is a growing need to refine and adapt underlying LLMs with unique data and processes. The AI Refinery framework, which is part of Accenture’s foundation model services, addresses this need by enabling businesses to develop custom LLMs imbued with domain-specific knowledge. This approach promises to enhance business operations and foster innovation across various industries.
Julie Sweet, Chair and CEO of Accenture, highlighted the significance of this launch. She noted that leading enterprises are keen on leveraging technology, data, and AI for transformation. According to Sweet, “Generative AI is reshaping industries, and businesses are eager to harness custom models to drive their strategic objectives. Accenture’s collaboration with NVIDIA allows us to help clients swiftly create and implement their bespoke Llama models, empowering them to achieve transformative results tailored to their priorities.”
Jensen Huang, Founder and CEO of NVIDIA, also commented on the partnership. He described the release of Meta’s openly available Llama models as a pivotal development for enterprise adoption of generative AI. Huang remarked, “Many enterprises are seeking expert guidance to develop custom Llama LLMs. With NVIDIA AI Foundry powering Accenture’s AI Refinery, businesses can benefit from comprehensive generative AI services, fostering growth through tailored model development and deployment.”
In addition to supporting external clients, Accenture is leveraging the AI Refinery framework to enhance its own operations. The company is initially applying the framework to its marketing and communications functions, with plans to extend it to other areas. This internal use of the AI Refinery will enable Accenture to rapidly develop generative AI applications that cater to its specific operational needs.
The AI Refinery framework comprises four components designed to help enterprises adapt and customize prebuilt foundation models:
- Domain Model Customization and Training: This feature allows the refinement of prebuilt models with an organization’s data and processes, leveraging NVIDIA AI Foundry to drive significant business value.
- Switchboard Platform: This platform enables users to select a combination of models tailored to their business context or based on criteria such as cost or accuracy (a simplified routing sketch follows this list).
- Enterprise Cognitive Brain: This component scans and vectorizes corporate data into an enterprise-wide index, empowering generative AI systems with comprehensive business knowledge (an indexing sketch also appears after this list).
- Agentic Architecture: This architecture allows AI systems to operate autonomously, with the ability to reason, plan, and propose tasks with minimal human oversight.
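To make the Switchboard idea concrete, here is a minimal sketch of cost- and accuracy-driven model selection. The model names, per-token prices, and accuracy scores are invented for illustration, and the routing logic is an assumption about how such a switchboard might work rather than Accenture's actual implementation:

```python
# Hypothetical switchboard-style model routing: pick the cheapest model that
# satisfies an accuracy floor and a per-1k-token budget. All figures are
# illustrative assumptions, not real pricing or benchmark data.
from dataclasses import dataclass


@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float  # assumed price in USD
    accuracy: float            # assumed benchmark score, 0.0-1.0


CATALOG = [
    ModelOption("llama-3.1-8b-custom", 0.0004, 0.78),
    ModelOption("llama-3.1-70b-custom", 0.0035, 0.86),
    ModelOption("llama-3.1-405b-custom", 0.0160, 0.91),
]


def pick_model(min_accuracy: float, budget_per_1k: float) -> ModelOption | None:
    """Return the cheapest catalog model meeting both constraints, if any."""
    candidates = [
        m for m in CATALOG
        if m.accuracy >= min_accuracy and m.cost_per_1k_tokens <= budget_per_1k
    ]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens, default=None)


if __name__ == "__main__":
    choice = pick_model(min_accuracy=0.85, budget_per_1k=0.01)
    print(choice.name if choice else "no model satisfies the constraints")
```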
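The Enterprise Cognitive Brain concept can likewise be illustrated with a short retrieval sketch: documents are vectorized into an index and the most relevant ones are fetched to ground a generative AI system. This uses scikit-learn's TfidfVectorizer as a stand-in for a production embedding model and vector database, with invented document contents; the framework's actual internals are not public:

```python
# Sketch of building a searchable index over corporate documents and
# retrieving context for a generative AI system. TF-IDF stands in for a
# real embedding model; the sample documents are fabricated for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Q3 sales grew 12% in the EMEA region, led by the industrial segment.",
    "The returns policy allows refunds within 30 days of purchase.",
    "Onboarding checklist: issue laptop, grant VPN access, assign mentor.",
]

# Vectorize the corpus into an enterprise-wide index of document vectors.
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(documents)


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, index).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]


if __name__ == "__main__":
    for doc in retrieve("How did sales perform last quarter?"):
        print(doc)
```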
Accenture’s AI Refinery services, which are available to all customers using Llama models, are supported by NVIDIA’s comprehensive AI Foundry service. This includes foundation models, NVIDIA NeMo, accelerated computing, expert support, and a wide partner ecosystem. The models developed through the AI Refinery can be deployed across major cloud platforms with a range of commercial options.