Discover Industry Breakthroughs Using AI Technology at Microsoft Build 2022

Join Microsoft Build 2022 to learn how NVIDIA AI technology solutions are transforming industries such as retail, manufacturing, automotive, and healthcare.

AI continues to transform global industries such as retail, manufacturing, automotive, and healthcare. NVIDIA partners with Microsoft Azure to give developers worldwide on-demand access to AI infrastructure, simplified infrastructure management, and solutions for deploying AI-enabled applications.

Join the NVIDIA team virtually at Microsoft Build, May 24-26, to learn about the latest technologies, tools, and techniques that help data scientists and developers take AI to production faster. Connect live with subject matter experts from NVIDIA and Microsoft, get your technical questions answered, and hear how customers like BMW and Archer Daniels Midland (ADM) are harnessing the power of NVIDIA technologies on Azure.

NVIDIA developer sessions at Microsoft Build 2022

The full NVIDIA content line-up can be found on our Microsoft Build showcase page.

Below is a quick preview:

Organizing data for machine learning

Live Customer Interview | 5/25, 10:45-11:15 am 
Isaac Himanga, Archer Daniels Midland (ADM)
Watch the live interview. Registration is not required.

Many tools analyze equipment data to identify degraded performance or opportunities for improvement. The hard part is finding relevant data for hundreds of thousands of assets to feed these models. ADM discusses how it is organizing process data into a structure that enables quick deployment of AI for data-driven decisions, and how a partnership with Sight Machine is moving ADM closer to data-centric AI using NVIDIA GPU technology on Azure.

Azure Cognitive Services deployment: AI inference with NVIDIA Triton Inference Server

Breakout | 5/25, 12-12:45 pm

Join this session to see how Azure Cognitive Services uses NVIDIA Triton Inference Server for inference at scale. We highlight two use cases: deploying the first-ever Mixture of Experts model for document translation and an acoustic model for Microsoft Teams Live Captioning. Tune in to learn about serving models with NVIDIA Triton, ONNX Runtime, and custom backends.
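
If you want a feel for what calling a model served by Triton looks like before the session, the sketch below sends a single request to a Triton HTTP endpoint with the tritonclient Python package. The server URL, model name, and tensor names (input_ids, logits) are placeholders for illustration, not the actual Cognitive Services deployment.

```python
import numpy as np
import tritonclient.http as httpclient

# Placeholder endpoint and model; a real deployment exposes its own model
# name and input/output tensor names in its config.pbtxt.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy input tensor matching the assumed model signature.
input_ids = np.zeros((1, 128), dtype=np.int64)
infer_input = httpclient.InferInput("input_ids", input_ids.shape, "INT64")
infer_input.set_data_from_numpy(input_ids)

# Request inference and read back the assumed output tensor.
response = client.infer(model_name="translation_model", inputs=[infer_input])
logits = response.as_numpy("logits")
print(logits.shape)
```

The same client code works regardless of whether the backend behind the model is ONNX Runtime, TensorRT, or a custom backend, which is part of what makes Triton convenient for serving mixed model fleets.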

How vision AI applications use NVIDIA DeepStream and Azure IoT Edge services

Ask the Experts | 5/25, 1-1:30 pm

Join experts from NVIDIA and Microsoft where you can ask questions about developing applications with Graph Composer and new DeepStream features, deploying through IoT Hub, connecting to other Azure IoT services, or transmitting inference results to the cloud.
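
As a rough illustration of that last point, here is one way an edge module could forward inference results to IoT Hub using the Azure IoT Device SDK for Python. The payload shape, output name, and camera ID are hypothetical, and DeepStream pipelines often use the built-in message broker plugin instead, so treat this as a minimal sketch rather than the recommended integration.

```python
import json
from azure.iot.device import IoTHubModuleClient, Message

# Hypothetical detection payload; in a real pipeline this would come from a
# probe on the DeepStream inference element.
detections = {
    "camera_id": "cam-01",
    "objects": [{"label": "person", "confidence": 0.92}],
}

# Create the client from the IoT Edge module environment variables.
client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

msg = Message(json.dumps(detections))
msg.content_type = "application/json"
msg.content_encoding = "utf-8"

# Send the message on a named module output ("detections" is an assumed name);
# the IoT Edge deployment manifest routes it upstream to IoT Hub.
client.send_message_to_output(msg, "detections")

client.disconnect()
```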

Accelerating model inference for Azure ML deployment with ONNX Runtime, OLive, NVIDIA Triton Inference Server, and Triton Model Analyzer

Table Topic | 5/25, 2-2:30 pm

Leaving performance on the table for AI inference deployments leads to poor cloud infrastructure utilization, high operational costs, and sluggish UX. Learn how to optimize the model configuration to maximize inference performance by using ONNX Runtime, OLive, Azure ML, NVIDIA Triton Inference Server, and NVIDIA Triton Model Analyzer.
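
As a small taste of that workflow, the sketch below shows only the ONNX Runtime side: loading a model with graph optimizations enabled and a GPU execution provider. The model path and thread count are illustrative; tools such as OLive and Triton Model Analyzer would normally drive the search over settings like these.

```python
import onnxruntime as ort

# Enable full graph optimizations; the thread count is an example value.
sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_options.intra_op_num_threads = 4

# "model.onnx" is a placeholder path. Prefer the CUDA execution provider and
# fall back to CPU if no GPU is available.
session = ort.InferenceSession(
    "model.onnx",
    sess_options,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print([inp.name for inp in session.get_inputs()])
```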

NVIDIA RAPIDS Spark plug-in on Azure Synapse

Video On-Demand

Accelerate your ETL and ML Spark applications using Azure Synapse with NVIDIA RAPIDS.
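
For context, enabling the RAPIDS Accelerator for Apache Spark comes down to a few Spark configuration settings. The sketch below shows them in a standalone PySpark session; on Azure Synapse the accelerator is typically enabled when the GPU pool is provisioned, and the exact config values here are illustrative.

```python
from pyspark.sql import SparkSession

# Illustrative settings only; requires the RAPIDS Accelerator and cuDF jars
# on the classpath and a GPU-enabled cluster.
spark = (
    SparkSession.builder.appName("rapids-etl-sketch")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    .config("spark.rapids.sql.concurrentGpuTasks", "2")
    .getOrCreate()
)

# A trivial DataFrame job; supported operations are planned onto the GPU.
df = spark.range(0, 10_000_000).selectExpr("id", "id % 100 AS key")
df.groupBy("key").count().show(5)
```

Supported SQL and DataFrame operations then run on the GPU automatically, while unsupported ones fall back to the CPU.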

Hands-on labs, tutorials, and resources

As a developer, you are a key contributor to the advancement of every field. We have created an online space devoted to your needs, with access to free SDKs, technical documentation, peer and domain expert help, and information on hardware to tackle the biggest challenges.

Join the free NVIDIA Developer Program for exclusive access to SDKs, technical documentation, and help from peers and domain experts. NVIDIA offers tools and training to accelerate AI, HPC, and graphics applications.

Connect with NVIDIA at Microsoft Build 2022.

