Don’t miss the opportunity to dive deep into the latest tips and tools from Intel® Software and learn more about Intel’s resources and hardware. Test them out yourself and consult our experts onsite at this year’s hands-on training for High Performance Computing & Artificial Intelligence!
Join us for 2 days of hands-on coding sessions on Parallel Programming, Performance Optimization, Artificial Intelligence, and Machine & Deep Learning.
Explore exciting topics including scientific and technical computing, computer vision, image and pattern recognition, machine learning, optimized deep learning and big data analytics.
Please bring your own Intel®-based laptop. We will provide all required software and technology. Detailed technical requirements will be sent to registered attendees.
You are welcome to attend both days or one day only; please specify at registration! Attendance is free, but seats are very limited this time, so please register as soon as possible. After your registration we will review and confirm your attendance as soon as possible!
In this workshop we will dive into the code-modernization framework to achieve the highest performance possible. Using examples, use cases, and better usage of the Intel® C/C++ compiler, we will point out possible inefficiencies and offer hints and strategies to ensure an application delivers great performance.
AGENDA DAY 1
From enterprise to cloud, and from high-performance computing (HPC) to AI.
This session will introduce Intel tools and the different suites available for writing code for single-node or multi-node computers, as well as for analyzing performance.
Step by step, attendees will learn how to modify the code of an N-body simulation (a dynamical system of particles under the influence of gravitational forces) through different optimization stages, spanning scalar tuning, vectorization tuning, memory tuning, and threading tuning.
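To give a flavor of what the vectorization-tuning stage looks like (a miniature sketch in plain NumPy, not the workshop's actual C/C++ code), the snippet below contrasts a naive scalar particle loop with a vectorized formulation of the same gravitational accelerations:

```python
import numpy as np

def accel_scalar(pos, mass, eps=1e-3):
    """Naive scalar version: one particle pair at a time."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                r2 = d @ d + eps * eps        # softening avoids division by zero
                acc[i] += mass[j] * d / r2 ** 1.5
    return acc

def accel_vectorized(pos, mass, eps=1e-3):
    """Vectorized version: all pairwise interactions at once."""
    d = pos[None, :, :] - pos[:, None, :]     # (n, n, 3) displacements
    r2 = (d ** 2).sum(axis=-1) + eps * eps
    np.fill_diagonal(r2, 1.0)                 # dummy value on the diagonal
    w = mass[None, :] / r2 ** 1.5
    np.fill_diagonal(w, 0.0)                  # no self-interaction
    return (w[:, :, None] * d).sum(axis=1)

rng = np.random.default_rng(0)
pos = rng.standard_normal((64, 3))
mass = rng.random(64)
assert np.allclose(accel_scalar(pos, mass), accel_vectorized(pos, mass))
```

The same idea (replacing a scalar inner loop with operations on whole arrays) is what the compiler's auto-vectorizer tries to do for the C/C++ code in the hands-on session.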
The session will cover the Intel® Compiler.
Intel® Advisor is a powerful tool for tracking down and solving vectorization problems. This demo will introduce Intel Advisor, in particular the survey and trip-count analyses, which we will use to track the performance issues of the previously introduced N-body simulation code. We will explain how to read and interpret Advisor's output to improve vectorization, and introduce the roofline analysis.
This session will cover Intel® Advisor and the Roofline Model.
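As a reminder of the model behind the roofline analysis (our own illustration, not Advisor output): attainable performance is bounded by the smaller of the compute peak and the memory ceiling, the latter being memory bandwidth times the kernel's arithmetic intensity (FLOPs per byte moved). The machine numbers below are hypothetical.

```python
def roofline(peak_gflops, peak_bw_gbs, intensity):
    """Attainable GFLOP/s under the roofline model:
    min(compute ceiling, memory ceiling = bandwidth * arithmetic intensity)."""
    return min(peak_gflops, peak_bw_gbs * intensity)

# Hypothetical machine: 1000 GFLOP/s peak, 100 GB/s memory bandwidth.
# A streaming kernel at 0.125 FLOP/byte is memory bound:
print(roofline(1000, 100, 0.125))   # 12.5 GFLOP/s
# A dense-matrix kernel at 50 FLOP/byte hits the compute roof:
print(roofline(1000, 100, 50))      # 1000 GFLOP/s
```

Advisor plots each loop of your application against these two ceilings, which is how it tells you whether to chase bandwidth or vectorization next.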
Why would you spend time optimizing functions in your source code when others have already done the work for you? Intel's performance libraries, such as the Intel® Math Kernel Library (MKL), can help to further increase code performance.
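The point is easy to demonstrate from Python, since NumPy dispatches its linear algebra to whatever optimized BLAS it was built against (MKL in the Intel® Distribution for Python; other builds commonly use OpenBLAS). A hand-written matrix multiply gives the same answer as the library call, just far more slowly:

```python
import numpy as np

def matmul_loops(a, b):
    """Hand-written triple loop -- correct, but what you would NOT want to ship."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

rng = np.random.default_rng(1)
a, b = rng.standard_normal((40, 30)), rng.standard_normal((30, 20))
# a @ b dispatches to the optimized BLAS NumPy was built against
assert np.allclose(matmul_loops(a, b), a @ b)
```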
During this session we will show how performance analysis tools like Intel® Advisor and Intel® VTune™ Amplifier can be used to investigate issues and ensure optimal performance. We will also show how to use the Intel® Distribution for Python, giving insights for machine learning applications.
AGENDA DAY 2
Optimized Software for Deep Learning and Machine Learning.
This session, after first introducing the Intel® Distribution for Python, will mainly cover and demonstrate Intel's technical best practices for faster time to solution when developing and deploying Machine Learning and Deep Learning training on Intel CPU-based platforms.
The session will cover the Intel® Distribution for Python, the Intel® DAAL library, and NumPy / SciPy.
In this session, we show you the techniques we employed to accelerate TensorFlow using the highly optimized open-source Intel MKL-DNN library.
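Part of getting the most out of an MKL-DNN-enabled TensorFlow build is tuning its threading environment. The sketch below shows the commonly documented knobs; the specific values are illustrative only and need to be tuned per machine:

```python
import os

# Common tuning knobs for MKL-DNN-enabled TensorFlow builds
# (illustrative values -- tune for your core count and workload):
os.environ["OMP_NUM_THREADS"] = "8"    # OpenMP threads per MKL-DNN primitive
os.environ["KMP_BLOCKTIME"] = "1"      # ms a thread spins before sleeping
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads

# With TensorFlow installed you would additionally set, before building graphs:
# import tensorflow as tf
# tf.config.threading.set_intra_op_parallelism_threads(8)
# tf.config.threading.set_inter_op_parallelism_threads(2)
```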
Credit card fraud is an issue that retailers and many other businesses face, and Machine Learning can be used to mitigate it. In this session we'll use different supervised algorithms to detect fraud in a Kaggle dataset: logistic regression, neural networks, random forests, and gradient boosting. We'll also try unsupervised techniques such as K-Means clustering and see how they compare with the supervised algorithms.
We'll show how Intel has optimized the underlying libraries (TensorFlow and Scikit-Learn) with MKL and DAAL, and how much speedup we gain from those optimizations.
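The supervised part of the session follows the usual scikit-learn pattern. The sketch below uses a synthetic, heavily imbalanced dataset as a stand-in for the Kaggle credit-card data (which we cannot reproduce here) and fits two of the classifiers mentioned above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Kaggle credit-card data:
# ~5% positives (label 1 = "fraud"), like a real imbalanced problem.
X, y = make_classification(n_samples=4000, n_features=20, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_tr, y_tr)
    # Plain accuracy is misleading on imbalanced data; the session also
    # looks at precision/recall on the fraud class.
    print(type(model).__name__, model.score(X_te, y_te))
```

With the Intel-optimized scikit-learn build, the same script runs unchanged; the speedup comes from the accelerated library underneath, not from code changes.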
In this session we will introduce the Intel® Distribution of OpenVINO™ Toolkit.
Built around convolutional neural network (CNN) workloads, the toolkit extends them across Intel® hardware (including accelerators) and maximizes performance, with deployment on Intel CPUs, GPUs, Movidius VPUs, and Intel FPGAs.