
How to Get Started as a Developer in AI


The promise of artificial intelligence has captured our cultural imagination since at least the 1950s—inspiring computer scientists to create new and increasingly complex technologies, while also building excitement about the future among everyday consumers. What if we could explore the bottom of the ocean without taking any physical risks? Or ride around in driverless cars on intelligent roadways? While our understanding of AI—and what’s possible—has changed over the past few decades, we have reason to believe that the age of artificial intelligence may finally be here. So, as a developer, what can you do to get started? This article will go over some basics of AI, and outline some tools and resources that may help.

First Things First—What Exactly is AI?

While there are a lot of different ways to think about AI and a lot of different techniques to approach it, the key to machine intelligence is that it must be able to sense, reason, and act, then adapt based on experience.

  • Sense—Identify and recognize meaningful objects or concepts in the midst of vast data. Is that a stoplight? Is it a tumor or normal tissue?
     
  • Reason—Understand the larger context, and make a plan to achieve a goal. If the goal is to avoid a collision, the car must calculate the likelihood of a crash based on vehicle behaviors, proximity, speed, and road conditions.
     
  • Act—Either recommend or directly initiate the best course of action. Based on vehicle and traffic analysis, it may brake, accelerate, or prepare safety mechanisms.
     
  • Adapt—Finally, we must be able to adapt algorithms at each phase based on experience, retraining them to be ever more intelligent. Autonomous vehicle algorithms should be re-trained to recognize more blind spots, factor new variables into the context, and adjust actions based on previous incidents.

What Does AI Look Like Today?

These days, artificial intelligence is an umbrella term to represent any program that can sense, reason, act, and adapt. Two ways that developers are actually getting machines to do that are machine learning and deep learning.

  • In machine learning, learning algorithms build a model from data, which they can improve on as they are exposed to more data over time. There are four main types of machine learning: supervised, unsupervised, semi-supervised, and reinforcement learning. In supervised machine learning, the algorithm learns to identify data by processing and categorizing vast quantities of labeled data. In unsupervised machine learning, the algorithm identifies patterns and categories within large amounts of unlabeled data—often much more quickly than a human brain could. You can read a lot more about machine learning in this article. (A short code sketch contrasting the two appears after this list.)
     
  • Deep learning is a subset of machine learning in which multilayered neural networks learn from vast amounts of data.
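
To make the distinction concrete, here is a minimal sketch, assuming the scikit-learn library (an illustrative choice not named in this article), that trains a supervised classifier on labeled digit images and lets an unsupervised algorithm cluster the same images without their labels:

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_digits(return_X_y=True)   # 1,797 labeled 8x8 digit images

    # Supervised: the algorithm learns from labeled examples (X paired with y).
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("supervised accuracy:", clf.score(X, y))

    # Unsupervised: the algorithm finds 10 clusters in X without ever seeing y.
    clusters = KMeans(n_clusters=10, n_init=10).fit_predict(X)
    print("first ten cluster assignments:", clusters[:10])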

AI in Action: A Machine Learning Workflow

As we discussed above, artificial intelligence is able to sense, reason, and act, then adapt based on experience. But what does that look like in practice? Here is a general workflow for machine learning, with a short code sketch after the list:

  1. Data Acquisition—First, you need huge amounts of data. This data can be collected from any number of sources, including sensors in wearables and other objects, the cloud, and the Web.
     
  2. Data Aggregation and Curation—Once the data is collected, data scientists will aggregate and label it (in the case of supervised machine learning).
     
  3. Model Development—Next, the data is used to develop a model, which then gets trained for accuracy and optimized for performance.
     
  4. Model Deployment and Scoring—The model is deployed in an application, where it is used to make predictions based on new data.
     
  5. Update with New Data—As more data comes in, the model becomes even more refined and more accurate. For instance, as an autonomous car drives, the application pulls in real-time information through sensors, GPS, 360-degree video capture, and more, which it can then use to optimize future predictions.
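
To make the five steps concrete, here is a minimal sketch, assuming the scikit-learn library (an illustrative choice; the workflow itself is library-agnostic):

    from sklearn.datasets import load_digits
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    # 1. Data acquisition: load a small labeled image dataset.
    X, y = load_digits(return_X_y=True)

    # 2. Aggregation and curation: labels here are already curated; hold out a
    #    slice to play the role of "new data" arriving later.
    X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.3,
                                                      random_state=0)

    # 3. Model development: train the model on the curated data.
    model = SGDClassifier(random_state=0).fit(X_train, y_train)

    # 4. Model deployment and scoring: predict on data the model never saw.
    print("accuracy on new data:", model.score(X_new, y_new))

    # 5. Update with new data: refine the model as labeled data keeps arriving.
    model.partial_fit(X_new, y_new)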

Opportunities for AI Developers

One of the most exciting things about AI is that it has the potential to revolutionize not just the computing industry, or the software industry, but every industry that touches our lives. It will transform society in much the same way as the industrial revolution, the technical revolution, and the digital revolution altered every aspect of daily life. Intel provides the foundation, frameworks, and strategies to power artificial intelligence. And when it comes to deep learning and machine learning technologies, Intel can help developers deliver projects better, faster, and more cost-effectively.

For developers, the expansion of the AI field means that you have the potential to apply your interest and knowledge of AI toward an industry that you’re also interested in, like music or sports or healthcare. As you explore the world of AI, think about what else you find interesting, and how you’d like to contribute to that field in a meaningful way. The ideas are limitless, but here are a few examples to get you thinking.
 

So, Where Should I Get Started? Intel Can Help.

Intel is supporting rapid innovation in artificial intelligence. The Intel Software Developer Zone for AI is a great starting point for finding community, tools, and training. Here are some specific links to get you started.

  • Join the AI Community– There is a robust community of AI developers worldwide. Connect with them on Facebook and LinkedIn, and look for Meetups, workshops, and events happening in your area. Intel regularly participates in conferences and puts on webinars about AI topics—learn more here.
     
  • Are you a student? See if the Intel® Software Student Developer Program is at your campus. Get hands-on training from industry experts, professors and professionals to build your skillset. Learn more here.
     
  • Optimized frameworks– Caffe* is one of the most popular community frameworks for image recognition, and Theano* is designed to help write models for deep learning. Both frameworks have been optimized for use with Intel® architecture. Learn how to install and use these frameworks, and find a number of useful libraries, here.
     
  • Hardware - Intel® Xeon Phi™ processor family– These massively multicore processors deliver powerful, highly parallel performance for machine learning and deep learning workloads. Get a brief overview of deep learning using Intel® architectures here, and learn more about the Intel® Xeon Phi™ processor family’s competitive performance for deep learning here.

The topic of AI is incredibly deep, and we’ve only scratched the surface so far. Come back soon for more articles about what’s happening and how you can get involved.  


Innovative Media Solutions Showcase


New, Inventive Media Solutions Made Possible with Intel Media Software Tools

With Intel media software tools, video solutions providers can create inspiring, innovative products that capitalize on next-gen capabilities like real-time 4K HEVC, virtual reality, simultaneous multi-camera streaming, high-dynamic range (HDR) content delivery, video security solutions with smart analytics, and more. Check these out, and envision using Intel's advanced media tools to transform your media, video, and broadcasting solutions for a competitive edge: high performance and efficiency, with room for higher profits, market growth, and greater reach.


Amazing Video Solution Enables Game-changing Sports Calls

Slomo.tv innovated its videoReferee* systems, which provide instant high-quality video replays from up to 18 cameras direct to referee viewing systems. Referees can view video from 4 cameras simultaneously at different angles, in slow motion, or zoomed, for objective, error-free gameplay analysis. The Kontinental Hockey League; basketball leagues in Korea, Russia, and Lithuania; and the Rio Olympics have used videoReferee. Read More.

 


Immersive Experiences with Real-time 4K HEVC Streaming

See how Wowza, Rivet VR, and Intel worked together to deliver a live-streamed 360-degree virtual-reality jazz concert at the legendary Blue Note Jazz Club in New York using hardware-assisted 4K video. See just how: Video | Article

 


    Mobile Viewpoint Delivers HEVC HDR Live Broadcasting

    Mobile Viewpoint delivers live HEVC HDR broadcasting from the scenes of breaking news. The company developed a mobile encoder running on 6th generation Intel® processors that uses the graphics-accelerated codec for low-power, hardware-accelerated encoding and transmission, optimized by Intel® Media Server Studio Pro Edition for HEVC compression and quality. The results: fast, high-quality video broadcasting on the go, so the world stays informed of fast-changing events. Read more.

     


    Sharp's Innovative Security Camera is built with Intel® Media Technologies

    With security concerns now part of everyday life, SHARP built an omnidirectional wireless, intelligent, digital surveillance camera for these needs. Built with an Intel® Celeron® processor (N3160), SHARP 12 megapixel image sensors, and the Intel® Media SDK for hardware-accelerated encoding, the QG-B20C camera can capture video in 4Kx3K resolution, provide all-around views, and includes intelligent automatic detection functions. Read more.

     

    MAGIX's Video Editing Software Provides HEVC to Broad Users

    While elite video pros have access to high-powered video production applications whose bells and whistles are mostly available only to enterprises, MAGIX unveiled Video Pro X, video editing software for semi-pro video production. Optimized with Intel Media Server Studio, Video Pro X provides HEVC encoding to prosumers and semi-pros, helping alleviate a bandwidth-constrained internet where millions of videos are shared. Read more.

     


    JPEG2000 Codec Now Native for Intel Media Server Studio

    Comprimato worked with Intel to provide additional video encoding technology as part of Intel Media Server Studio through a software plug-in for high-quality, low-latency JPEG2000 encoding. This powerful encoding option allows users to transcode JPEG2000 content contained in IMF, AS02 or MXF OP1a files to distribution formats like AVC and HEVC, and enables software-defined processing of IP video streams in broadcast applications. By using Media Server Studio to access hardware acceleration and programmable graphics in Intel GPUs, encoding can run fast and reduce latency, which is important in live broadcasting. Read more.

     

    SPB TV AG Showcases Innovative Mobile TV/On-demand Transcoder

    SPB TV AG innovated its single-platform Astra* transcoder, a pro solution for fast, high-quality processing of linear TV broadcast and on-demand video streams from a single head-end to any mobile, desktop or home device. The transcoder uses Intel® Core™ i7 processors with media accelerators and delivers high-density transcoding optimized by Intel Media Server Studio. “We are delighted that our collaboration with Intel ensures faster and high quality transcoding, making our new product performance remarkable,” said CEO of SPB TV AG Kirill Filippov. Read more.

     

    SURF Communications collaborates with Intel for NFV & WebRTC all-inclusive platforms

    SURF Communication Solutions announced SURF ORION-HMP* and SURF MOTION-HMP*. The SURF-HMP architecture delivers fast, high-quality media acceleration, facilitating up to 4K video resolutions and ultra-high-capacity HD voice and video processing. The system runs on Intel® processors with integrated graphics and is optimized by Intel Media Server Studio. SURF-HMP is driven by a powerful processing engine that supports all major video and voice codecs and protocols in use, and delivers a multitude of applications for transcoding, conferencing/mixing, MRF, playout, recording, messaging, video surveillance, encryption and more. Read more.

     


    More about Intel Media Software Tools

    Intel Media Server Studio - Provides an Intel® Media SDK, runtimes, graphics drivers, media/audio codecs, and advanced performance and quality analysis tools to help video solution providers deliver fast, high-density media transcoding.

    Intel Media SDK - A cross-platform API for developing client and media applications for Windows*. Achieve fast video playback, encode, processing, media format conversion, and video conferencing. Accelerate RAW video and image processing. Get audio decode/encode support.

    Accelerating Media Processing: Which Media Software Tool do I use? English | Chinese

     

    Transfer learning using neon


    Introduction

    In the last few years plenty of deep neural net (DNN) models have been made available for a variety of applications such as classification, image recognition and speech translation. Typically, each of these models is designed for a very specific purpose, but can be extended to novel use cases. For example, one can train a model to recognize numbers and characters in an image and then reuse that model to read signposts as part of a broader model or dataset used in autonomous driving.

    In this blog post we will:

    1. Explain transfer learning and some of its applications
    2. Explain how neon can be used for transfer learning
    3. Walk through example code that uses neon for transferring a pre-trained model to a new dataset
    4. Discuss the merits of transfer learning with some results

    Transfer Learning

    Consider the task of visual classification. Convolutional neural networks (CNN) are organized into several layers with each layer learning features at a different scale. The lower level layers recognize low level features such as the fur of a cat or the texture on a brick wall. Higher level layers recognize higher level features such as the body shape of a walking pedestrian or the configuration of windows in a car.

    Features learnt at various scales offer excellent feature vectors for a wide range of classification tasks. They fundamentally differ from feature vectors obtained by kernel-based algorithms designed by human operators, because these feature vectors are learnt through extensive training runs. These training runs aim to systematically refine the model parameters so that the typical error between the predicted output, y_p = f(x_t) (where x_t is the observed real-world signal and f() is the model), and the ground truth, y_t, is made as small as possible.
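
    As a toy illustration of that refinement loop (a numpy sketch with a made-up linear model, not code from any CNN framework), gradient descent nudges the parameters so that the error between y_p = f(x_t) and y_t shrinks:

    import numpy as np

    rng = np.random.default_rng(0)
    x_t = rng.normal(size=(100, 3))          # observed real-world signals
    y_t = x_t @ np.array([1.5, -2.0, 0.5])   # ground truth from hidden weights
    w = np.zeros(3)                          # model parameters to refine
    for _ in range(200):
        y_p = x_t @ w                              # predicted output f(x_t)
        grad = 2 * x_t.T @ (y_p - y_t) / len(x_t)  # gradient of the MSE
        w -= 0.1 * grad                            # systematic refinement step
    print(np.round(w, 3))  # converges toward [1.5, -2.0, 0.5]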

    There are several examples of reusing the features learnt by a well trained CNN. Oquab et al. [1] show how the features of an AlexNet model trained on images with a single object can be used to recognize objects in more complex images taken in the real world. Szegedy et al. [2] show that, given a very deep neural network, the features learnt by only half the layers of the network can be used for visual classification. Bell et al. [3] show that material features (such as wood, glass, etc.) learnt by various pre-trained CNNs such as AlexNet and GoogLeNet can be used for other tangential tasks such as image segmentation. The features learnt by a pre-trained network work so well because they capture the general statistics, the spatial coherence and the hierarchical meta relationships in the data.

    Transferring Learning with neon

    Neon not only excels at training and inference of DNNs, but also delivers a rich ecosystem that supports the requirements surrounding DNNs. For example, you can serialize learned models, load pre- or partially-trained models, choose from several DNNs built by industry experts, and run them in the cloud without any physical infrastructure of your own. You can get a good overview of the neon API here.

    You can load pre-trained weights of a model and access them at a per-layer level with two lines of code as follows:

    from neon.util.persist import load_obj
    pre_trained_model = load_obj(filepath)
    pre_trained_layers = pre_trained_model['model']['config']['layers']

    You can then transfer the weights from these pre-learnt layers to a compatible layer in your own model with one line of code as follows:

    layer_of_new_model.load_weights(pre_trained_layer, load_states=True)

    Then the task of transferring weights from a pre-learnt model to a few select layers of your model is straightforward:

    # iterate over the new model's layers and copy weights wherever the
    # predicate approves the transfer
    new_layers = [l for l in new_model.layers.layers]
    for i, layer in enumerate(new_layers):
        if load_pre_trained_weight(i, layer):
            layer.load_weights(pre_trained_layers[i], load_states=True)
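
    The load_pre_trained_weight() predicate above is user-defined. A hypothetical version (an assumption for illustration, not code from the actual implementation) might transfer weights for every layer except the new adaptation layers added at the end of the model:

    def load_pre_trained_weight(i, layer):
        # Hypothetical policy: reuse pre-trained weights for all but the last
        # few layers, which are new and have no pre-trained counterpart.
        num_new_layers = 3  # adjust to match your model surgery
        return i < len(new_layers) - num_new_layers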

    That’s it! You have selectively transferred your pre-trained model into neon. In the rest of this post, we will discuss: 1) how to structure your new model, 2) how to selectively write code and maximally reuse the neon framework and 3) how to quickly train your new model to very high accuracy in neon without having to go through an extensive new training exercise. We will discuss this in the context of implementing the work of Oquab et. al. [1].

    General Scene Classification using Weights Trained on Individual Objects

    ImageNet is a very popular dataset whose training images mostly depict individual objects drawn from 1000 different classes. It is an excellent database for obtaining feature vectors representing individual objects. However, pictures taken in the real world tend to be much more complex, with many instances of objects captured in a single image at various scales. These scenes are further complicated by occlusions. This is illustrated in the figure below, where you find many instances of people and cows at varying degrees of scale and occlusion.

    Classification in such images is typically done using two techniques: 1) using a sliding multiscale sampler which tries to classify small portions of the image and 2) selectively feeding region proposals discovered by more sophisticated algorithms into the DNN for classification. An implementation of the latter approach using Fast R-CNN [4] can be found here. Fast R-CNN also uses transfer learning to accelerate its training. In this section we will discuss the former approach, which is easier to implement. Our implementation can be found here. It trains on the Pascal VOC dataset using an AlexNet model that was pre-trained on the ImageNet dataset.

    The core structure of the implementation is simple:

    def main():
    
        # Collect the user arguments and hyper parameters
        args, hyper_params = get_args_and_hyperparameters()
    
        # setup the CPU or GPU backend
        be = gen_backend(**extract_valid_args(args, gen_backend))
    
        # load the training dataset. This will download the dataset
        # from the web and cache it locally for subsequent use.
        train_set = MultiscaleSampler('trainval', '2007', ...)
    
        # create the model by replacing the classification layer
        # of AlexNet with new adaptation layers
        model, opt = create_model( args, hyper_params)
    
        # Seed the Alexnet conv layers with pre-trained weights
        if args.model_file is None and hyper_params.use_pre_trained_weights:
            load_imagenet_weights(model, args.data_dir)
    
        train( args, hyper_params, model, opt, train_set)
    
        # Load the test dataset. This will download the dataset
        # from the web and cache it locally for subsequent use.
        test_set = MultiscaleSampler('test', '2007', ...)
        test( args, hyper_params, model, test_set)
    
        return

    Creating the Model

     

    The structure of our new neural net is the same as the pre-trained AlexNet except we replace its final classification layer with two affine layers and a dropout layer that serve to adapt the neural net trained to the labels of ImageNet to the new set of labels of the Pascal VOC dataset. With the simplicity of neon, that amounts to replacing this line of code (see create_model())

    # train for the 1000 labels of ImageNet
    Affine(nout=1000, init=Gaussian(scale=0.01),
           bias=Constant(-7), activation=Softmax())

    with these:

    Affine(nout=4096, init=Gaussian(scale=0.005),
           bias=Constant(.1), activation=Rectlin()),
    Dropout(keep=0.5),
    # train for the 21 labels of PascalVOC
    Affine(nout=21, init=Gaussian(scale=0.01),
           bias=Constant(0), activation=Softmax())

    Since we are already using a pre-trained model, we just need to do about 6-8 epochs of training. So we’ll use a small learning rate of 0.0001. Furthermore, we will reduce that learning rate aggressively every few epochs and use a high momentum component, because the pre-learned weights are already close to a local minimum. These are all done as hyperparameter settings:

    if hyper_params.use_pre_trained_weights:
        # This will typically train in 5-10 epochs. Use a small learning rate
        # and quickly reduce every few epochs.
        s = 1e-4
        hyper_params.learning_rate_scale = s
        hyper_params.learning_rate_sched = Schedule(step_config=[15, 20],
                                                    change=[0.5*s, 0.1*s])
        hyper_params.momentum = 0.9
    else:
        # need to actively manage the learning rate if the
        # model is not pre-trained
        s = 1e-2
        hyper_params.learning_rate_scale = 1e-2
        hyper_params.learning_rate_sched = Schedule(
                                step_config=[8, 14, 18, 20],
                                change=[0.5*s, 0.1*s, 0.05*s, 0.01*s])
        hyper_params.momentum = 0.1

    These hyperparameters are applied with one line of code in create_model():

    opt = GradientDescentMomentum(hyper_params.learning_rate_scale,
                                  hyper_params.momentum, wdecay=0.0005,
                                  schedule=hyper_params.learning_rate_sched)

    Multiscale Sampler

    The 2007 Pascal VOC dataset supplies several rectangular regions of interest (ROI) per image, with a label for each ROI. Neon ships with a loader for the Pascal VOC dataset. We’ll create our dataset loader as a class that derives from that loader's PASCALVOCTrain class.

    We will sample the input images at successively refined scales of [1., 1.3, 1.6, 2., 2.3, 2.6, 3.0, 3.3, 3.6, 4., 4.3, 4.6, 5.] and collect 448 patches. The sampling process at a given scale is simply (see compute_patches_at_scale()):

    size = (np.amin(shape)-1) / scale
    num_samples = np.ceil( (shape-1) / size)
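
    For a hypothetical 375x500 image at scale 2.0, the arithmetic works out as follows (a worked example; numpy imported for completeness):

    import numpy as np

    shape = np.array([375, 500])               # hypothetical image (height, width)
    scale = 2.0
    size = (np.amin(shape) - 1) / scale        # patch edge: 374 / 2.0 = 187.0
    num_samples = np.ceil((shape - 1) / size)  # ceil([374, 499] / 187) -> [2., 3.]
    print(size, num_samples)                   # 2 x 3 = 6 patches at this scale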

    Since the patches are generated rather than derived from the ground truth, we need to assign each patch a label. A patch is assigned the label of the ROI with which it significantly overlaps. The overlap criteria we choose are that at least 20% of a patch’s area must overlap with that of an ROI, and at least 60% of that ROI’s area must be covered by the overlap region. If no ROI, or more than one, qualifies for a given patch, we label that patch as background (see get_label_for_patch()). Typically, background patches tend to dominate, so during training we bias the sampling to carry more non-background patches (see resample_patches()). All of the sampling is done dynamically within the __iter__() function of the MultiscaleSampler. This function is called when neon asks the dataset to supply the next mini-batch worth of data. The motivation behind this process is illustrated in figure 4 of [1].
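
    Here is a minimal sketch of that labeling rule in plain Python (an illustration of the stated criteria, not the actual get_label_for_patch() code; boxes are hypothetical (x0, y0, x1, y1) tuples):

    def label_for_patch(patch, rois, background=0):
        # A patch takes an ROI's label only if the intersection covers at least
        # 20% of the patch area and at least 60% of the ROI area; zero or
        # multiple qualifying ROIs mean the patch is labeled as background.
        px0, py0, px1, py1 = patch
        patch_area = (px1 - px0) * (py1 - py0)
        qualifying = []
        for label, (rx0, ry0, rx1, ry1) in rois:
            ix0, iy0 = max(px0, rx0), max(py0, ry0)
            ix1, iy1 = min(px1, rx1), min(py1, ry1)
            if ix1 <= ix0 or iy1 <= iy0:
                continue  # no overlap at all
            inter = (ix1 - ix0) * (iy1 - iy0)
            roi_area = (rx1 - rx0) * (ry1 - ry0)
            if inter >= 0.2 * patch_area and inter >= 0.6 * roi_area:
                qualifying.append(label)
        return qualifying[0] if len(qualifying) == 1 else background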

    We use this patch sampling method for both training and inference. The MultiscaleSampler feeds neon a minibatch worth of input and label data while neon is not even aware that a meta form of multiscale learning is in progress. Since there are more patches per image than the minibatch size, a single image will feed multiple mini-batches during both training and inference. During training we simply use the CrossEntropyMulti cost function that ships with neon. During inference we leverage neon’s flexibility by defining our own cost function.

    Inference

    We perform multi-class classification during inference by predicting the presence or absence of a particular object label in the image. We do this on a per-class basis by skewing the class predictions with an exponent and accumulating this skewed value across all the patches inferred on the image. In other words, the score S(i,c) for a class c in image i is the sum, over all patches j, of the individual patch scores P(j,c) raised to an exponent: S(i,c) = Σ_j P(j,c)^exponent.

    This is implemented by the ImageScores class and the score computation can be expressed with two lines of code (see __call__() ):

    exp = self.be.power(y, self.exponent)
    self.scores_batch[:] = self.be.add(exp, self.scores_batch)

    The intuition behind this scoring technique is illustrated in figures 5 and 6 of [1].
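
    In plain numpy the same accumulation can be sketched as follows (made-up numbers, not the neon backend code above):

    import numpy as np

    probs = np.array([[0.7, 0.2, 0.1],   # per-patch class probabilities for one
                      [0.6, 0.3, 0.1],   # image: 3 patches x 3 classes (made up)
                      [0.1, 0.8, 0.1]])
    exponent = 5
    scores = np.sum(probs ** exponent, axis=0)  # S(i,c) = sum_j P(j,c)^exponent
    print(scores.argmax())  # 1: the single very confident patch dominates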

    Results

    Here are the results on the test dataset. The prediction quality is measured with the Average Precision (AP) metric. The overall mean average precision (mAP) is 74.67. Those are good numbers for a fairly simple implementation. It took just 15 epochs of training, as compared to the more than 90 epochs of training that the pre-trained model itself needed. In addition, if you factor in the hyper-parameter optimization that went into the pre-trained model, we have a significant savings in compute.

    Class        AP       Class          AP
    air plane    81.17    dining table   58.28
    bike         79.32    dog            74.06
    bird         81.21    horse          77.38
    boat         74.84    motorbike      79.91
    bottle       52.89    person         90.69
    bus          74.57    plant          69.05
    car          87.72    sheep          78.02
    cat          78.54    sofa           59.55
    chair        63.00    train          81.32
    cow          69.57    tv             82.26

    As expected, training converges much faster with a pre-trained model, as illustrated in the graph below.

    Here are some helpful hints for running the example:

    1. Use this command to start a fresh new training run:
       ./transfer_learning.py -e10 -r13 -b gpu --save_path model.prm --serialize 1 --history 20 > train.log 2>&1 &
    2. Use this command to run the test. Make sure that the number of epochs specified in this command with the -e option is zero. That ensures that neon will skip the training and jump directly to testing.
       ./transfer_learning.py -e0 -r13 -b gpu --model_file model.prm > infer.log 2>&1 &
    3. Training each epoch can take 4-6 hours if you are training on the full 5000 images of the training dataset. If you had to terminate your training job for some reason, you can always restart from the last saved epoch with this command:
       ./transfer_learning.py -e10 -r13 -b gpu --save_path train.prm --serialize 1 --history 20 --model_file train.prm > train.log 2>&1 &

    The pre-trained model that we used can be found here.

    A fully trained model obtained after transfer learning can be found here.

    You can use the trained model to do classification on the Pascal VOC dataset using AlexNet.

    References

    [1] M. Oquab et al. Learning and Transferring Mid-Level Image Representations Using Convolutional Neural Networks. CVPR 2014.
    [2] C. Szegedy et al. Rethinking the Inception Architecture for Computer Vision. 2015.
    [3] S. Bell, P. Upchurch, N. Snavely, and K. Bala. Material Recognition in the Wild with the Materials in Context Database. CVPR 2015.
    [4] R. Girshick. Fast R-CNN. 2015.
     

    About the Author:

    Aravind Kalaiah is a tech lead experienced in building scalable distributed systems for real time query processing and high performance computing. He was a technology lead at NVIDIA where he was the founding engineer on the team that built the world’s first debugger for massively parallel processors. Aravind has previously founded revenue generating startups for enterprises and consumer markets in the domains of machine learning and computer vision.

     

    Introducing DNN primitives in Intel® Math Kernel Library


        Deep Neural Networks (DNNs) are at the cutting edge of the machine learning domain. These algorithms received wide industry adoption in the late 1990s and were initially applied to tasks such as handwriting recognition on bank checks, where they matched and even exceeded human capabilities. Today DNNs are used for image recognition, video and natural language processing, and for solving complex visual understanding problems such as autonomous driving. DNNs are very demanding in terms of compute resources and the volume of data they must process. To put this into perspective, the modern image recognition topology AlexNet takes a few days to train on modern compute systems and uses slightly over 14 million images. Tackling this complexity requires well-optimized building blocks that decrease training time to meet the needs of industrial applications.

        Intel® Math Kernel Library (Intel® MKL) 2017 introduces the DNN domain, which includes functions necessary to accelerate the most popular image recognition topologies, including AlexNet, VGG, GoogleNet and ResNet.

        These DNN topologies rely on a number of standard building blocks, or primitives, that operate on data in the form of multidimensional sets called tensors. These primitives include convolution, normalization, activation and inner product functions along with functions necessary to manipulate tensors. Performing computations effectively on Intel architectures requires taking advantage of SIMD instructions via vectorization and of multiple compute cores via threading. Vectorization is extremely important as modern processors operate on vectors of data up to 512 bits long (16 single-precision numbers) and can perform up to two multiply and add (Fused Multiply Add, or FMA) operations per cycle. Taking advantage of vectorization requires data to be located consecutively in memory. As typical dimensions of a tensor are relatively small, changing the data layout introduces significant overhead; we strive to perform all the operations in a topology without changing the data layout from primitive to primitive.

    Intel MKL provides primitives for most widely used operations implemented for vectorization-friendly data layout:

    • Direct batched convolution
    • Inner product
    • Pooling: maximum, minimum, average
    • Normalization: local response normalization across channels (LRN), batch normalization
    • Activation: rectified linear unit (ReLU)
    • Data manipulation: multi-dimensional transposition (conversion), split, concat, sum and scale.

    Programming model

        Execution flow for the neural network topology includes two phases: setup and execution. During the setup phase the application creates descriptions of all DNN operations necessary to implement scoring, training, or other application-specific computations. To pass data from one DNN operation to the next one, some applications create intermediate conversions and allocate temporary arrays if the appropriate output and input data layouts do not match. This phase is performed once in a typical application and followed by multiple execution phases where actual computations happen.

        During the execution step the data is fed to the network in a plain layout like BCWH (batch, channel, width, height) and is converted to a SIMD-friendly layout (a small illustration follows below). As data propagates between layers the layout is preserved, and conversions are made only when an operation is not supported by the existing implementation.
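
        As an illustration of what such a layout change involves (a sketch in numpy; the actual Intel MKL layouts are opaque and managed through its conversion primitives), a plain tensor can be re-blocked so that 16 channel values sit contiguously, matching one 512-bit vector of single-precision numbers:

        import numpy as np

        # Hypothetical re-blocking of a plain (batch, channel, height, width)
        # tensor so that blocks of 16 channels become the innermost, contiguous
        # axis -- one full 512-bit SIMD vector per spatial position.
        n, c, h, w, block = 2, 32, 8, 8, 16
        x = np.random.rand(n, c, h, w).astype(np.float32)
        blocked = x.reshape(n, c // block, block, h, w).transpose(0, 1, 3, 4, 2)
        print(blocked.shape)  # (2, 2, 8, 8, 16)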

     

        Intel MKL DNN primitives implement a plain C application programming interface (API) that can be used in existing C/C++ DNN frameworks. An application that calls Intel MKL DNN functions typically involves the following stages:

        Setup stage: for a given DNN topology, the application creates all DNN operations necessary to implement scoring, training, or other application-specific computations. To pass data from one DNN operation to the next, some applications create intermediate conversions and allocate temporary arrays when the output and input data layouts do not match.

        Execution stage: at this stage, the application calls to the DNN primitives that apply the DNN operations, including necessary conversions, to the input, output, and temporary arrays.

        Examples of training and scoring computations can be found in the MKL package directory: <mklroot>\examples\dnnc\source

    Performance

    Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center (BVLC), is one of the most popular community frameworks for image recognition. Together with AlexNet, a neural network topology for image recognition, and ImageNet, a database of labeled images, Caffe is often used as a benchmark. The chart below shows a performance comparison of the original Caffe implementation and the Intel-optimized version, which takes advantage of optimized matrix-matrix multiplication and the new Intel MKL 2017 DNN primitives, on the Intel® Xeon® processor E5-2699 v4 (codename Broadwell) and the Intel® Xeon Phi™ processor 7250 (codename Knights Landing).

    Summary

    DNN primitives available in Intel MKL 2017 can be used to accelerate Deep Learning workloads on Intel Architecture. Please refer to Intel MKL Developer Reference Manual and examples for detailed information.

     

     

    Intel® XDK FAQs - General


    How can I get started with Intel XDK?

    There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

    Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

    You can do the following to access our demo apps:

    • Select Project tab
    • Select "Start a New Project"
    • Select "Samples and Demos"
    • Create a new project from a demo

    If you have specific questions after that, please post them to our forums.

    How do I convert my web app or web site into a mobile app?

    The Intel XDK creates Cordova mobile apps (aka PhoneGap apps). Cordova web apps are driven by HTML5 code (HTML, CSS and JavaScript). There is no web server on the mobile device to "serve" the HTML pages in your Cordova web app; the main program resources required by your Cordova web app are file-based, meaning all of your web app resources are located within the mobile app package and reside on the mobile device. Your app may also require resources from a server. In that case, you will need to connect with that server using AJAX or similar techniques, usually via a collection of RESTful APIs provided by that server. However, your app is not integrated into that server; the two entities are independent and separate.

    Many web developers believe they should be able to include PHP or Java code or other "server-based" code as an integral part of their Cordova app, just as they do in a "dynamic web app." This technique does not work in a Cordova web app, because your app does not reside on a server, there is no "backend"; your Cordova web app is a "front-end" HTML5 web app that runs independent of any servers. See the following articles for more information on how to move from writing "multi-page dynamic web apps" to "single-page Cordova web apps":

    Can I use an external editor for development in Intel® XDK?

    Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

    Some popular editors among our users include:

    • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
    • Notepad++* for a lightweight editor
    • Jetbrains* editors (Webstorm*)
    • Vim* the editor

    How do I get code refactoring capability in Brackets* (the Intel XDK code editor)?

    ...to be written...

    Why doesn’t my app show up in Google* play for tablets?

    ...to be written...

    What is the global-settings.xdk file and how do I locate it?

    global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug, etc.). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

    You can locate global-settings.xdk here:

    • Mac OS X*
      ~/Library/Application Support/XDK/global-settings.xdk
    • Microsoft Windows*
      %LocalAppData%\XDK
    • Linux*
      ~/.config/XDK/global-settings.xdk

    If you are having trouble locating this file, you can search for it on your system using something like the following:

    • Windows:
      > cd /
      > dir /s global-settings.xdk
    • Mac and Linux:
      $ sudo find / -name global-settings.xdk

    When do I use the intelxdk.js, xhr.js and cordova.js libraries?

    The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

    How do I get my Android (and Crosswalk) keystore file?

    New with release 3088 of the Intel XDK, you may now download your build certificates (aka keystore) using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Convert a Legacy Android Certificate" in that document, for details regarding how to do this.

    It may also help to review this short, quick overview video (there is no audio) that shows how you convert your existing "legacy" certificates to the "new" format that allows you to directly manage your certificates using the certificate management tool that is built into the Intel XDK. This conversion process is done only once.

    If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

    How do I rename my project that is a duplicate of an existing project?

    See this FAQ: How do I make a copy of an existing Intel XDK project?

    How do I recover when the Intel XDK hangs or won't start?

    • If you are running Intel XDK on Windows* it must be Windows* 7 or higher. It will not run reliably on earlier versions.
    • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
    • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
    • Clear Intel XDK's program cache directories and files.

      On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

      > cd %AppData%\..\Local\XDK
      > del *.* /s/q

      To locate the "XDK cache" directory on [OS X*] and [Linux*] systems, do the following:

      $ sudo find / -name global-settings.xdk
      $ cd <dir found above>
      $ sudo rm -rf *

      You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
    • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
    • Do not store your project directories on a network share (Intel XDK currently has issues with network shares that have not yet been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). This network share issue is a known issue with a fix request in place.
    • There have also been issues with running behind a corporate network proxy or firewall. To check them try running Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
    • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel XDK App Center and confirm that you can login with your Intel XDK account. While you are there you might also try deleting the offending project(s) from the App Center.

    If you can reliably reproduce the problem, please send us a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to html5tools@intel.com.

    Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

    No, it is not an open source project. However, it utilizes many open source components that are then assembled into the Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up the Intel XDK.

    The following open source components are the major elements that are being used by Intel XDK:

    • Node-Webkit
    • Chromium
    • Ripple* emulator
    • Brackets* editor
    • Weinre* remote debugger
    • Crosswalk*
    • Cordova*
    • App Framework*

    How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

    Intel XDK does support the use of 9 patch png for Android* apps splash screen. You can read up more at https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png on how to create a 9 patch png image and link to an Intel XDK sample using 9 patch png images.

    How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

    You can try adding nw.exe as the app that needs an exception in AVG.

    What do I specify for "App ID" in Intel XDK under Build Settings?

    Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

    Here are some useful articles on how to create an App ID:

    Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

    You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

    <?xml version="1.0" encoding="UTF-8"?>
    <plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
            id="my-custom-intents-plugin" version="1.0.0">
        <name>My Custom Intents Plugin</name>
        <description>Add Intents to the AndroidManifest.xml</description>
        <license>MIT</license>
        <engines>
            <engine name="cordova" version=">=3.0.0" />
        </engines>
        <!-- android -->
        <platform name="android">
            <config-file target="AndroidManifest.xml" parent="/manifest/application">
                <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale"
                          android:label="@string/app_name" android:launchMode="singleTop"
                          android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar">
                    <intent-filter>
                        <action android:name="android.intent.action.SEND" />
                        <category android:name="android.intent.category.DEFAULT" />
                        <data android:mimeType="*/*" />
                    </intent-filter>
                </activity>
            </config-file>
        </platform>
    </plugin>

    You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

    $ apktool d my-app.apk
    $ cd my-app
    $ more AndroidManifest.xml

    This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

    Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

    <?xml version="1.0" encoding="UTF-8"?>
    <plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
            id="my-custom-bis-plugin" version="0.0.2">
        <name>My Custom BIS Plugin</name>
        <description>Add BIS info to iOS plist file.</description>
        <license>BSD-3</license>
        <preference name="BIS_KEY" />
        <engines>
            <engine name="cordova" version=">=3.0.0" />
        </engines>
        <!-- ios -->
        <platform name="ios">
            <config-file target="*-Info.plist" parent="CFBundleURLTypes">
                <array>
                    <dict>
                        <key>ITSAppUsesNonExemptEncryption</key><true/>
                        <key>ITSEncryptionExportComplianceCode</key><string>$BIS_KEY</string>
                    </dict>
                </array>
            </config-file>
        </platform>
    </plugin>

    Also see this forum thread (https://software.intel.com/en-us/forums/intel-xdk/topic/680309) for an example of how to customize the OneSignal plugin's notification sound in an Android app by way of a simple custom Cordova plugin. The same technique can be applied to adding custom icons and other assets to your project.

    How can I share my Intel XDK app build?

    You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

    Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

    Common reasons include:

    • The App ID specified in your project settings does not match the one you specified in Apple's developer portal.
    • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
    • In Project Build Settings, your App Name is invalid. It should be modified to include only letters, spaces, and numbers.

    How do I add multiple domains in Domain Access?

    Here is the primary doc source for that feature.

    If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab.

    How do I build more than one app using the same Apple developer account?

    On Apple developer, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from Intel XDK Build tab only for the first app. For subsequent apps, reuse the same certificate and import this certificate into the Build tab like you usually would.

    How do I include search and spotlight icons as part of my app?

    Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top-level directory (same location as the other intelxdk.*.config.xml files) and add the following lines to support icons in Settings and other areas in iOS*.

    <!-- Spotlight Icon -->
    <icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
    <icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
    <icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />
    <!-- iPhone Spotlight and Settings Icon -->
    <icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
    <icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
    <icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />
    <!-- iPad Spotlight and Settings Icon -->
    <icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
    <icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

    For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

    For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

    NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

    Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

    Does Intel XDK support Modbus TCP communication?

    No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

    How do I sign an Android* app using an existing keystore?

    New with release 3088 of the Intel XDK, you may now import your existing keystore into Intel XDK using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Import an Android Certificate Keystore" in that document, for details regarding how to do this.

    If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

    How do I build separately for different Android* versions?

    Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

    How do I display the 'Build App Now' button if my display language is not English?

    If your display language is not English and the 'Build App Now' button is proving to be troublesome, you may change your display language to English, which can be downloaded via a Windows* update. Once you have installed the English language pack, proceed to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

    How do I update my Intel XDK version?

    When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

    How do I import my existing HTML5 app into the Intel XDK?

    If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

    If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

    If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

    It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executable files returned by the build system. See the following images for the recommended project file layout.

    I am unable to login to App Preview with my Intel XDK password.

    On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

    Try the following if you are having such difficulties:

    • Reset your password, using the Intel XDK, to something short and simple.

    • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

    • Confirm that this new password works with the Intel Developer Zone login.

    • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

    • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

    If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

    If you are having trouble logging into any pages on the Intel web site (including the Intel XDK forum), please see the Intel Sign In FAQ for suggestions and contact info. That login system is the backend for the Intel XDK login screen.

    How do I completely uninstall the Intel XDK from my system?

    Take the following steps to completely uninstall the XDK from your Windows system:

    • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

    • Then:
      > cd %LocalAppData%\Intel\XDK
      > del *.* /s/q

    • Then:
      > cd %LocalAppData%\XDK
      > copy global-settings.xdk %UserProfile%
      > del *.* /s/q
      > copy %UserProfile%\global-settings.xdk .

    • Then:
      -- Go to xdk.intel.com and select the download link.
      -- Download and install the new XDK.

    To do the same on a Linux or Mac system:

    • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
       
    • Remove the directory into which the Intel XDK was installed.
      -- Typically /opt/intel or your home (~) directory on a Linux machine.
      -- Typically in the /Applications/Intel XDK.app directory on a Mac.
       
    • Then:
      $ find ~ -name global-settings.xdk
      $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
      $ cp global-settings.xdk ~
      $ rm -Rf *
      $ mv ~/global-settings.xdk .

       
    • Then:
      -- Go to xdk.intel.com and select the download link.
      -- Download and install the new XDK.

    Is there a tool that can help me highlight syntax issues in Intel XDK?

    Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

    How do I delete built apps and test apps from the Intel XDK build servers?

    You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within the Intel XDK, after which direct access to the App Center control panel will be removed.

    I need help with the App Security API plugin; where do I find it?

    Visit the primary documentation book for the App Security API and see this forum post for some additional details.

    When I install my app or use the Debug tab Avast antivirus flags a possible virus, why?

    If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message from the Avast anti-virus software installed on your Android device, it is because you are side-loading the app (or the Intel XDK Debug modules) onto your device (using a download link after building, or by using the Debug tab to debug your app), or because your app was installed from an "untrusted" Android store. See the following official explanation from Avast:

    Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

    1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
    2. The source is not an established market (Google Play is an example of an established market).

    If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

    You may see several Avast anti-virus notification screens on your device. All of these are perfectly normal. They appear because you must enable the installation of "non-market" apps in order to use your device for debugging, and because the App IDs associated with your never-published app (or with the custom debug modules that the Debug tab in the Intel XDK builds and installs on your device) will not be found in an "established" (aka "trusted") market, such as Google Play.

    If you choose to ignore the "Suspicious app activity!" threat you will not receive a threat for that debug module any longer. It will show up in the Avast 'ignored issues' list. Updates to an existing, ignored, custom debug module should continue to be ignored by Avast. However, new custom debug modules (due to a new project App ID or a new version of Crosswalk selected in your project's Build Settings) will result in a new warning from the Avast anti-virus tool.


    How do I add a Brackets extension to the editor that is part of the Intel XDK?

    The number of Brackets extensions provided in the built-in edition of the Brackets editor is limited to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK. Adding incompatible extensions can cause the Intel XDK to quit working.

    Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary: each time you update the Intel XDK (or if you reinstall it) you will have to re-add your Brackets extensions. To add a Brackets extension, use the following procedure:

    • exit the Intel XDK
    • download a ZIP file of the extension you wish to add
    • on Windows, unzip the extension here:
      %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
    • on Mac OS X, unzip the extension here:
      /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
    • start the Intel XDK

    Note that the locations given above are subject to change with new releases of the Intel XDK.

    Why does my app or game require so many permissions on Android when built with the Intel XDK?

    When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

    A pure Cordova app requires the NETWORK permission; it is needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

    Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

    If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

    • android.permission.INTERNET
    • android.permission.ACCESS_NETWORK_STATE
    • android.permission.ACCESS_WIFI_STATE
    • android.permission.WRITE_EXTERNAL_STORAGE

    then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

    BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).
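    If you want to verify exactly which permissions ended up in your built APK, the Android SDK's aapt tool can list them (a sketch; the APK file name is hypothetical and the path to aapt varies with your SDK installation):

      $ aapt dump permissions my-app.apk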

    How do I make a copy of an existing Intel XDK project?

    If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

    • Exit the Intel XDK.
    • Copy the entire project directory:
      • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
      • on Mac use Finder to "right-click" and then "duplicate" your project directory
      • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

    If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with that stored in your new project. If you do not follow the procedure below you will have multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

    • Exit the Intel XDK.
    • Make a copy of your existing project using the process described above.
    • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different from the original project name, preferably the same name as the new project folder in which you are making this new project).
    • Using a text editor only (such as Notepad, Sublime Text, or Brackets), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line; it will look something like this:
      "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
    • Change the GUID to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
    • Save the modified "project-new.xdk" file.
    • Open the Intel XDK.
    • Go to the Projects tab.
    • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
    • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
    • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.

    My project does not include a www folder. How do I fix it so it includes a www or source directory?

    The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention of putting your source inside a "source directory" within your project folder.

    This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

    • Exit the Intel XDK.
    • Copy the entire project directory:
      • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
      • on Mac use Finder to "right-click" and then "duplicate" your project directory
      • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
    • Create a "www" directory inside the new duplicate project you just created above.
    • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and <project-name>.xdke files or any intelxdk.config.*.xml files; those must stay in the root of the project directory).
    • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different from the original project name, preferably the same name as the new project folder in which you are making this new project).
    • Using a text editor only (such as Notepad, Sublime Text, or Brackets), open the new "project-copy.xdk" file (whatever you named it) and find the projectGuid line; it will look something like this:
      "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
    • Change the GUID to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
    • A few lines down find: "sourceDirectory": "",
    • Change it to this: "sourceDirectory": "www",
    • Save the modified "project-copy.xdk" file.
    • Open the Intel XDK.
    • Go to the Projects tab.
    • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
    • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.

    Can I install more than one copy of the Intel XDK onto my development system?

    Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

    Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

    On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

    Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

    I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

    Yes, if you have a true web app or "client app" that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.
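    For example, a page fragment that was previously rendered by a server-side script might instead be fetched from a RESTful endpoint with an AJAX call. A minimal sketch (the URL and the JSON response format are hypothetical):

      var xhr = new XMLHttpRequest();
      xhr.open("GET", "https://api.example.com/items");       // hypothetical REST endpoint
      xhr.onload = function() {
          if (xhr.status === 200) {
              var items = JSON.parse(xhr.responseText);       // the server returns data, not rendered HTML
              document.getElementById("list").innerHTML =
                  items.map(function(item) { return "<li>" + item.name + "</li>"; }).join("");
          }
      };
      xhr.send();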

    What is the best training approach to using the Intel XDK for a newbie?

    First, become well-versed in the art of client web apps: apps that rely only on HTML, CSS and JavaScript and use RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

    What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

    There is no single most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app will vary as a function of the platform. Just as there are differences between Chrome and Firefox and Safari and Internet Explorer, there are differences between iOS 9 and iOS 8 and Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions of Android. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option: it normalizes and updates the HTML5 runtime across Android devices and versions.

    In general, if you can get your app working well on Android (or Crosswalk for Android) first you will generally have fewer issues to deal with when you start to work on the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options available, so it is the easiest platform to use for debugging and testing your app.

    Is my password encrypted and why is it limited to fifteen characters?

    Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK itself does not store or manage your userid and password.

    The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

    Why does the Intel XDK take a long time to start on Linux or Mac?

    ...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

    At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings; when that happens, you may see the error message quoted above when starting the Intel XDK.

    On some systems you can get around this problem by setting some proxy environment variables and then starting the Intel XDK from a command-line that includes those configured environment variables. To set those environment variables, use something similar to the following:

    $ export no_proxy="localhost,127.0.0.1/8,::1"
    $ export NO_PROXY="localhost,127.0.0.1/8,::1"
    $ export http_proxy=http://proxy.mydomain.com:123/
    $ export HTTP_PROXY=http://proxy.mydomain.com:123/
    $ export https_proxy=http://proxy.mydomain.com:123/
    $ export HTTPS_PROXY=http://proxy.mydomain.com:123/

    IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

    If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.
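    For example (a sketch; unset whichever of the variables you exported earlier):

    $ unset no_proxy NO_PROXY
    $ unset http_proxy HTTP_PROXY
    $ unset https_proxy HTTPS_PROXY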

    After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

    On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

    $ open /Applications/Intel\ XDK.app/

    On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

    $ ~/intel/XDK/xdk.sh &

    In the Linux case, you will need to adjust the directory name that points to the xdk.sh file to match your installation. The example above assumes a local install into the ~/intel/XDK directory; since Linux installations have more options regarding the installation directory, you will need to adjust the above to suit your particular system and install directory.

    How do I generate a P12 file on a Windows machine?

    See these articles:

    How do I change the default dir for creating new projects in the Intel XDK?

    You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

    "projects-tab": {
        "defaultPath": "/Users/paul/Documents/XDK",
        "LastSortType": "descending|Name",
        "lastSortType": "descending|Opened",
        "thirdPartyDisclaimerAcked": true
    },

    The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

    On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

    "projects-tab": {
        "thirdPartyDisclaimerAcked": false,
        "LastSortType": "descending|Name",
        "lastSortType": "descending|Opened",
        "defaultPath": "C:\\Users\\paul/Documents"
    },

    Obviously, it's the defaultPath part you want to change.

    BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

    Make sure the result is proper JSON when you are done, or it may cause your XDK to cough and hack loudly. Make a backup copy of global-settings.xdk before you start, just in case.
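    For example, on a Windows machine, a modified entry might look like this (a sketch; the new path is hypothetical, and note that backslashes must be escaped in JSON):

    "projects-tab": {
        "thirdPartyDisclaimerAcked": false,
        "LastSortType": "descending|Name",
        "lastSortType": "descending|Opened",
        "defaultPath": "D:\\XDK-projects"
    },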

    Where can I find a list of recent and upcoming webinars?

    How can I change the email address associated with my Intel XDK login?

    Login to the Intel Developer Zone with your Intel XDK account userid and password and then locate your "account dashboard." Click the "pencil icon" next to your name to open the "Personal Profile" section of your account, where you can edit your "Name & Contact Info," including the email address associated with your account, under the "Private" section of your profile.

    What network addresses must I enable in my firewall to ensure the Intel XDK will work on my restricted network?

    Normally, access to the external servers that the Intel XDK uses is handled automatically by your proxy server. However, if you are working in an environment that has restricted Internet access and you need to provide your IT department with a list of URLs that you need access to in order to use the Intel XDK, then please provide them with the following list of domain names:

    • appcenter.html5tools-software.intel.com (for communication with the build servers)
    • s3.amazonaws.com (for downloading sample apps and built apps)
    • download.xdk.intel.com (for getting XDK updates)
    • debug-software.intel.com (for using the Test tab weinre debug feature)
    • xdk-feed-proxy.html5tools-software.intel.com (for receiving the tweets in the upper right corner of the XDK)
    • signin.intel.com (for logging into the XDK)
    • sfederation.intel.com (for logging into the XDK)

    Normally this should be handled by your network proxy (if you're on a corporate network) or should not be an issue if you are working on a typical home network.

    I cannot create a login for the Intel XDK, how do I create a userid and password to use the Intel XDK?

    If you have downloaded and installed the Intel XDK but are having trouble creating a login, you can create the login outside the Intel XDK. To do this, go to the Intel Developer Zone and push the "Join Today" button. After you have created your Intel Developer Zone login you can return to the Intel XDK and use that userid and password to login to the Intel XDK. This same userid and password can also be used to login to the Intel XDK forum.

    Installing the Intel XDK on Windows fails with a "Package signature verification failed." message.

    If you receive a "Package signature verification failed" message when installing the Intel XDK on your system, it is likely due to one of the following two reasons:

    • Your system does not have a properly installed "root certificate" file, which is needed to confirm that the install package is good.
    • The install package is corrupt and failed the verification step.

    The first case can happen if you are attempting to install the Intel XDK on an unsupported version of Windows. The Intel XDK is only supported on Microsoft Windows 7 and higher. If you attempt to install on Windows Vista (or earlier) you may see this verification error. The workaround is to install the Intel XDK on a Windows 7 or greater machine.

    The second case is likely due to a corruption of the install package during download or due to tampering. The workaround is to re-download the install package and attempt another install.

    If you are installing on a Windows 7 (or greater) machine and you see this message it is likely due to a missing or bad root certificate on your system. To fix this you may need to start the "Certificate Propagation" service: open the Windows "services.msc" panel and start that service. Additional links related to this problem can be found here > https://technet.microsoft.com/en-us/library/cc754841.aspx

    See this forum thread for additional help regarding this issue > https://software.intel.com/en-us/forums/intel-xdk/topic/603992

    Trouble installing the Intel XDK on a Linux or Ubuntu system: which option should I choose?

    Choose the local user option, not root or sudo, when installing the Intel XDK on your Linux or Ubuntu system. This is the most reliable and trouble-free option and is the default installation option. It ensures that the Intel XDK has all the proper permissions necessary to execute properly on your Linux system. The Intel XDK will be installed in a subdirectory of your home (~) directory.

    Inactive account, login issues, or problems updating an APK in the store: how do I request an account transfer?

    As of June 26, 2015 we migrated all Intel XDK accounts to the more secure intel.com login system (the same login system you use to access this forum).

    We have migrated nearly all active users to the new login system. Unfortunately, there are a few active user accounts that we could not automatically migrate to intel.com, primarily because the intel.com login system does not allow the use of some characters in userids that were allowed in the old login system.

    If you had not used the Intel XDK for a long time prior to June 2015, your account may not have been automatically migrated. If you own an "inactive" account it will have to be manually migrated -- try logging into the Intel XDK with your old userid and password to determine whether it still works. If you find that you cannot login to your existing Intel XDK account, and still need access to your old account, please send a message to html5tools@intel.com and include your userid and the email address associated with that userid, so we can guide you through the steps required to reactivate your old account.

    Alternatively, you can create a new Intel XDK account. If you have submitted an app to the Android store from your old account you will need access to that old account to retrieve the Android signing certificates in order to upgrade that app on the Android store; in that case, send an email to html5tools@intel.com with your old account username and email and new account information.

    Connection Problems? -- Intel XDK SSL certificates update

    On January 26, 2016 we updated the SSL certificates on our back-end systems to SHA2 certificates. The existing certificates were due to expire in February of 2016. We have also disabled support for obsolete protocols.

    If you are experiencing persistent connection issues (since Jan 26, 2016), please post a problem report on the forum and include in your problem report:

    • the operation that failed
    • the version of your XDK
    • the version of your operating system
    • your geographic region
    • and a screen capture

    How do I resolve build failure: "libpng error: Not a PNG file"?  

    If you are experiencing build failures with CLI 5 Android builds, and the detailed error log includes a message similar to the following:

    Execution failed for task ':mergeArmv7ReleaseResources'.
    > Error: Failed to run command: /Developer/android-sdk-linux/build-tools/22.0.1/aapt s -i .../platforms/android/res/drawable-land-hdpi/screen.png -o .../platforms/android/build/intermediates/res/armv7/release/drawable-land-hdpi-v4/screen.png

    Error Code: 42

    Output: libpng error: Not a PNG file

    You need to convert your icon and/or splash screen images to PNG format.

    The error message refers to a file named "screen.png" -- which is what each of your splash screen images was renamed to before being moved into the build project resource directories. In this case, JPG images were supplied as splash screen images rather than PNG images, so after renaming, the build system found the files to be invalid.

    Convert your splash screen images to PNG format. Renaming JPG images to PNG will not work! You must convert your JPG images into PNG format images using an appropriate image editing tool. The Intel XDK does not provide any such conversion tool.
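    For example, if you have the ImageMagick command-line tools installed, a conversion might look like this (a sketch; the file name is hypothetical, and any image editor that can export PNG will do):

      $ convert splash-land-hdpi.jpg splash-land-hdpi.png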

    Beginning with Cordova CLI 5, all icons and splash screen images must be supplied in PNG format. This applies to all supported platforms. This is an undocumented "new feature" of the Cordova CLI 5 build system that was implemented by the Apache Cordova project.

    Why do I get a "Parse Error" when I try to install my built APK on my Android device?

    Because you have built an "unsigned" Android APK. You must click the "signed" box in the Android Build Settings section of the Projects tab if you want to install an APK on your device. The only reason you would choose to create an "unsigned" APK is if you need to sign it manually. This is very rare and not the normal situation.

    My converted legacy keystore does not work. Google Play is rejecting my updated app.

    The keystore you converted when you updated to 3088 (now 3240 or later) is the same keystore you were using in 2893. When you upgraded to 3088 (or later) and "converted" your legacy keystore, you re-signed and renamed your legacy keystore and it was transferred into a database to be used with the Intel XDK certificate management tool. It is still the same keystore, but with an alias name and password assigned by you and accessible directly by you through the Intel XDK.

    If you kept the converted legacy keystore in your account following the conversion you can download that keystore from the Intel XDK for safe keeping (do not delete it from your account or from your system). Make sure you keep track of the new password(s) you assigned to the converted keystore.

    There are two problems we have experienced with converted legacy keystores at the time of the 3088 release (April, 2016):

    • Foreign (non-ASCII) characters used in the new alias name and passwords were being corrupted.
    • Final signing of your APK by the build system was being done with RSA256 rather than SHA1.

    Both of the above items have been resolved and should no longer be an issue.

    If you are currently unable to complete a build with your converted legacy keystore (i.e., builds fail when you use the converted legacy keystore but succeed when you use a new keystore), the first bullet above is likely the reason your converted keystore is not working. In that case we can reset your converted keystore and give you the option to convert it again. You do this by requesting that your legacy keystore be "reset" by filling out this form. To be 100% sure during that second conversion, use only 7-bit ASCII characters in the alias name and password(s) you assign.

    IMPORTANT: using the legacy certificate to build your Android app is ONLY necessary if you have already published an app to an Android store and need to update that app. If you have never published an app to an Android store using the legacy certificate you do not need to concern yourself with resetting and reconverting your legacy keystore. It is easier, in that case, to create a new Android keystore and use that new keystore.

    If you ARE able to successfully build your app with the converted legacy keystore, but your updated app (in the Google store) does not install on some older Android 4.x devices (typically a subset of Android 4.0-4.2 devices), the second bullet cited above is likely the reason for the problem. The solution, in that case, is to rebuild your app and resubmit it to the store (that problem was a build-system problem that has been resolved).

    How can I have others beta test my app using Intel App Preview?

    Apps that you sync to your Intel XDK account, using the Test tab's green "Push Files" button, can only be accessed by logging into Intel App Preview with the same Intel XDK account credentials that you used to push the files to the cloud. In other words, you can only download and run your app for testing with Intel App Preview if you log into the same account that you used to upload that test app. This restriction applies to downloading your app into Intel App Preview via the "Server Apps" tab, at the bottom of the Intel App Preview screen, or by scanning the QR code displayed on the Intel XDK Test tab using the camera icon in the upper right corner of Intel App Preview.

    If you want to allow others to test your app, using Intel App Preview, it means you must use one of two options:

    • give them your Intel XDK userid and password
    • create an Intel XDK "test account" and provide your testers with that userid and password

    For security's sake, we highly recommend you use the second option (create an Intel XDK "test account").

    A "test account" is simply a second Intel XDK account that you do not plan to use for development or builds. Do not use the same email address for your "test account" as you are using for your main development account. You should use a "throw away" email address for that "test account" (an email address that you do not care about).

    Assuming you have created an Intel XDK "test account" and have instructed your testers to download and install Intel App Preview; have provided them with your "test account" userid and password; and you are ready to have them test:

    • sign out of your Intel XDK "development account" (using the little "man" icon in the upper right)
    • sign into your "test account" (again, using the little "man" icon in the Intel XDK toolbar)
    • make sure you have selected the project that you want users to test, on the Projects tab
    • go to the Test tab
    • make sure "MOBILE" is selected (upper left of the Test tab)
    • push the green "PUSH FILES" button on the Test tab
    • log out of your "test account"
    • log into your development account

    Then, tell your beta testers to log into Intel App Preview with your "test account" credentials and instruct them to choose the "Server Apps" tab at the bottom of the Intel App Preview screen. From there they should see the name of the app you synced using the Test tab and can simply start it by touching the app name (followed by the big blue and white "Launch This App" button). Starting the app this way is actually easier than sending them a copy of the QR code. The QR code is very dense and can be hard to read with some devices, depending on the quality of the camera in the device.

    Note that when running your test app inside of Intel App Preview your testers cannot exercise any features associated with third-party plugins, only core Cordova plugins. Thus, you need to ensure that those parts of your app that depend on non-core Cordova plugins have been disabled or have exception handlers to prevent your app from crashing or freezing.
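    For example, a defensive check before calling into a third-party plugin might look like this (a minimal sketch; the plugin's global object and method are hypothetical):

      document.addEventListener("deviceready", function() {
          if (window.SomeThirdPartyPlugin) {              // hypothetical plugin global; check before use
              window.SomeThirdPartyPlugin.doSomething();
          } else {
              console.log("Plugin unavailable (e.g., running in App Preview); feature disabled.");
          }
      });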

    I'm having trouble making Google Maps work with my Intel XDK app. What can I do?

    There are many reasons that can cause your attempt to use Google Maps to fail. Mostly it is because you need to download the Google Maps API (a JavaScript library) at runtime to make things work, and there is no guarantee that you will have a good network connection. So if you do it the way you are used to doing it in a browser...

    <script src="https://maps.googleapis.com/maps/api/js?key=API_KEY&sensor=true"></script>

    ...you may get yourself into trouble, in an Intel XDK Cordova app. See Loading Google Maps in Cordova the Right Way for an excellent tutorial on why this is a problem and how to deal with it. Also, it may help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, especially item #3, to get a better understanding of why you shouldn't use the "browser technique" you're familiar with.
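    One common approach is to load the Maps library at runtime, and only when the device is online. A minimal sketch (the connectivity check and callback are simplifications; see the tutorial above for a more robust treatment):

      function loadGoogleMaps() {
          if (!navigator.onLine) {                        // offline: defer loading or notify the user
              console.log("No network connection; Google Maps not loaded.");
              return;
          }
          var script = document.createElement("script");
          script.src = "https://maps.googleapis.com/maps/api/js?key=API_KEY&callback=onMapsReady";
          document.body.appendChild(script);
      }

      function onMapsReady() {
          // the Google Maps API is now available; create your map here
      }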

    An alternative is to use a mapping tool that allows you to include the JavaScript directly in your app, rather than downloading it over the network each time your app starts. Several Intel XDK developers have reported very good luck with the open-source JavaScript library named LeafletJS, which uses OpenStreetMap as its map data source.

    You can also search the Cordova Plugin Database for Cordova plugins that implement mapping features, in some cases using native SDKs and libraries.

    How do I fix "Cannot find the Intel XDK. Make sure your device and Intel XDK are on the same wireless network." error messages?

    You can either disable your firewall or allow access through the firewall for the Intel XDK. To allow access through the Windows firewall, go to the Windows Control Panel and search for the Firewall (Control Panel > System and Security > Windows Firewall > Allowed Apps) and enable Node Webkit (nw or nw.exe) through the firewall.
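    Alternatively, a rule can be added from an administrator command prompt (a sketch; the path to nw.exe is hypothetical -- search your Intel XDK install directory, e.g. under %LocalAppData%\Intel\XDK, for the actual location):

      > netsh advfirewall firewall add rule name="Intel XDK (nw.exe)" dir=in action=allow program="C:\Users\paul\AppData\Local\Intel\XDK\nw.exe" enable=yes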


    Google Services needs my SHA1 fingerprint. Where do I get my app's SHA fingerprint?

    Your app's SHA fingerprint is part of your build signing certificate. Specifically, it is part of the signing certificate that you used to build your app. The Intel XDK provides a way to download your build certificates directly from within the Intel XDK application (see the Intel XDK documentation for details on how to manage your build certificates). Once you have downloaded your build certificate you can use these instructions provided by Google, to extract the fingerprint, or simply search the Internet for "extract fingerprint from android build certificate" to find many articles detailing this process.
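    For example, using the keytool utility that ships with the Java JDK (a sketch; the keystore file name and alias are hypothetical):

      $ keytool -list -v -keystore my-release.keystore -alias my-app

    Among the certificate details printed you will find the SHA1 fingerprint.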

    Why am I unable to test or build or connect to the old build server with Intel XDK version 2893?

    This is an Important Note Regarding the use of Intel XDK Versions 2893 and Older!!

    As of June 13, 2016, versions of the Intel XDK released prior to March 2016 (2893 and older) can no longer use the Build tab, the Test tab or Intel App Preview; and can no longer create custom debug modules for use with the Debug and Profile tabs. This change was necessary to improve the security and performance of our Intel XDK cloud-based build system. If you are using version 2893 or older, of the Intel XDK, you must upgrade to version 3088 or greater to continue to develop, debug and build Intel XDK Cordova apps.

    The error message quoted above, "NOTICE: Internet Connection and Login Required," when trying to use the Build tab is due to the fact that the cloud-based component used by those older versions of the Intel XDK has been retired and is no longer present. The error message appears to be misleading, but it is the easiest way to identify this condition.

    How do I run the Intel XDK on Fedora Linux?

    See the instructions below, copied from this forum post:

    $ sudo find xdk/install/dir -name libudev.so.0
    $ cd dir/found/above
    $ sudo rm libudev.so.0
    $ sudo ln -s /lib64/libudev.so.1 libudev.so.0

    Note the "xdk/install/dir" is the name of the directory where you installed the Intel XDK. This might be "/opt/intel/xdk" or "~/intel/xdk" or something similar. Since the Linux install is flexible regarding the precise installation location you may have to search to find it on your system.

    Once you find the libudev.so.0 file in the Intel XDK install directory you must "cd" to that directory to finish the operations as written above.

    Additional instructions have been provided in the related forum thread; please see that thread for the latest information regarding hints on how to make the Intel XDK run on a Fedora Linux system.

    The Intel XDK generates a path error for my launch icons and splash screen files.

    If you have an older project (created prior to August of 2016 using a version of the Intel XDK older than 3491) you may be seeing a build error indicating that some icon and/or splash screen image files cannot be found. This is likely due to the fact that some of your icon and/or splash screen image files are located within your source folder (typically named "www") rather than in the new package-assets folder. For example, inspecting one of the auto-generated intelxdk.config.*.xml files you might find something like the following:

    <icon platform="windows" src="images/launchIcon_24.png" width="24" height="24"/>
    <icon platform="windows" src="images/launchIcon_434x210.png" width="434" height="210"/>
    <icon platform="windows" src="images/launchIcon_744x360.png" width="744" height="360"/>
    <icon platform="windows" src="package-assets/ic_launch_50.png" width="50" height="50"/>
    <icon platform="windows" src="package-assets/ic_launch_150.png" width="150" height="150"/>
    <icon platform="windows" src="package-assets/ic_launch_44.png" width="44" height="44"/>

    where the first three images are not being found by the build system because they are located in the "www" folder and the last three are being found, because they are located in the "package-assets" folder.

    This problem usually comes about because the UI does not include the appropriate "slots" to hold those images. This results in some "dead" icon or splash screen images inside the <project-name>.xdk file which need to be removed. To fix this, make a backup copy of your <project-name>.xdk file and then, using a CODE or TEXT editor (e.g., Notepad++ or Brackets or Sublime Text or vi, etc.), edit your <project-name>.xdk file in the root of your project folder.

    Inside of your <project-name>.xdk file you will find entries that look like this:

    "icons_": [
        {"relPath": "images/launchIcon_24.png", "width": 24, "height": 24},
        {"relPath": "images/launchIcon_434x210.png", "width": 434, "height": 210},
        {"relPath": "images/launchIcon_744x360.png", "width": 744, "height": 360},
        ...

    Find all the entries that are pointing to the problem files and remove those problem entries from your <project-name>.xdk file. Obviously, you need to do this when the XDK is closed and only after you have made a backup copy of your <project-name>.xdk file, just in case you end up with a missing comma. The <project-name>.xdk file is a JSON file and needs to be in proper JSON format after you make changes or it will not be read properly by the XDK when you open it.

    Then move your problem icons and splash screen images to the package-assets folder and reference them from there. Use this technique (below) to add additional icons by using the intelxdk.config.additions.xml file.

    <!-- alternate way to add icons to Cordova builds, rather than using XDK GUI -->
    <!-- especially for adding icon resolutions that are not covered by the XDK GUI -->
    <!-- Android icons and splash screens -->
    <platform name="android">
        <icon src="package-assets/android/icon-ldpi.png" density="ldpi" width="36" height="36" />
        <icon src="package-assets/android/icon-mdpi.png" density="mdpi" width="48" height="48" />
        <icon src="package-assets/android/icon-hdpi.png" density="hdpi" width="72" height="72" />
        <icon src="package-assets/android/icon-xhdpi.png" density="xhdpi" width="96" height="96" />
        <icon src="package-assets/android/icon-xxhdpi.png" density="xxhdpi" width="144" height="144" />
        <icon src="package-assets/android/icon-xxxhdpi.png" density="xxxhdpi" width="192" height="192" />
        <splash src="package-assets/android/splash-320x426.9.png" density="ldpi" orientation="portrait" />
        <splash src="package-assets/android/splash-320x470.9.png" density="mdpi" orientation="portrait" />
        <splash src="package-assets/android/splash-480x640.9.png" density="hdpi" orientation="portrait" />
        <splash src="package-assets/android/splash-720x960.9.png" density="xhdpi" orientation="portrait" />
    </platform>


    Intel® Math Kernel Library Benchmarks (Intel® MKL Benchmarks)


    The Intel MKL Benchmarks package includes the Intel® Optimized LINPACK Benchmark, the Intel® Optimized MP LINPACK Benchmark for Clusters, and the Intel® Optimized High Performance Conjugate Gradient Benchmark from the latest Intel MKL release. Use the links in the table below to download the package for Linux*, Windows* or OS X*.

    By downloading any sample package you accept the End User License Agreement


    Package                                        Release Date   Download Size   LINPACK        MP LINPACK     HPCG (v 3.0)
                                                                                  Src    Bin     Src    Bin     Src    Bin
    Linux* package (l_mklb_p_2017.1.013) (.tgz)    Sep 1, 2016    22 MB           --     X       X      X       X      X
    Windows* package (w_mklb_p_2017.1.014) (.zip)  Sep 1, 2016    15 MB           --     X       X      X       --     --
    OS X* package (m_mklb_p_2017.1.014) (.tgz)     Sep 1, 2016    3 MB            --     X       --     --      --     --

    Package Contents: LINPACK = Intel Optimized LINPACK Benchmark; MP LINPACK = Intel Optimized MP LINPACK Benchmark for Clusters; HPCG = Intel Optimized High Performance Conjugate Gradient Benchmark (v 3.0). Src = source included; Bin = pre-built binary included.

    Intel® Xeon Phi™ Processor Software Optimization Guide


    This document targets engineers interested in optimizing code for improved performance on the Intel® Xeon Phi™ processor. The manual begins with a high-level description of the Intel® Xeon Phi™ processor micro-architecture. It follows with several topics that have the highest impact on performance: AVX-512 instructions, memory subsystems, micro-architectural nuances, compiler knobs and directives, numeric sequences, MCDRAM as cache, and scalar versus vector coding.

    Integration Wrappers for Intel® Integrated Performance Primitives (Intel® IPP)


    To provide easy-to-use APIs and reduce the effort required to add Intel® Integrated Performance Primitives (Intel® IPP) functions to your application, the Intel® IPP library introduces the new Integration Wrappers APIs. These APIs aggregate multiple Intel® IPP functions and provide easy interfaces to support external threading of Intel® IPP functions. A technical preview of the Integration Wrappers functionality is now available for evaluation.

    Integration Wrappers consist of C and C++ interfaces:

    • C interface aggregates Intel IPP functions of similar functionality with various data types and channels into one function. Initialization steps required by several Intel IPP functions are implemented in one initialization function for each functionality. To reduce the size of your code and save time required for integration, the wrappers handle all memory management and Intel IPP function selection routines.
    • C++ interface wraps around the C interface to provide default parameters, easily initialized objects as parameters, exception handling, and objects for complex Intel IPP functions with automatic memory management for specification structures.

    Integration Wrappers are available as a separate download in the form of source and pre-built binaries.

    1. Intel® IPP Integration Wrappers Preview Overview

    1.1 Key Features

    Integration Wrappers simplify usage of Intel IPP functions and address some of the advanced use cases of Intel IPP. They consist of the C and C++ APIs which provide the following key features:

    C interface provides compatibility with C libraries and applications and enables you to use the following features of Integration Wrappers:

    • Automatic selection of the proper Intel IPP function based on input parameters
    • Automatic handling of temporary memory allocations for Intel IPP functions
    • Improved tiling handling and automatic borders processing for tiles
    • Memory optimizations for threading

    C++ interface additionally provides:

    • Easier-to-use classes, such as IwSize (image size structure in Integration Wrappers) instead of IppiSize (image size structure in Intel IPP functions), IwRect instead of IppiRect, and IwValue as a unified scalar parameter for borders and other per-channel input values
    • Complex Intel IPP functions designed as classes to use automatic construction and destruction features

    The following two code examples implement the same image resizing operation with the two APIs: 1) resizing an image with the Intel® IPP functions, and 2) resizing an image using the Intel® IPP Integration Wrappers APIs. The second implementation is much simpler and requires less effort to use the Intel IPP functions.

    1. Image Resizing with Intel® IPP functions

    { ......
         ippSts = ippiResizeGetSize_8u(srcSize, dstSize, ippLinear, 0, &specSize, &initSize);
         if(ippSts < 0) return ippSts;
         //allocate internal buffer
         pSpec = (IppiResizeSpec_32f*)ippsMalloc_8u(specSize);
         if(specSize && !pSpec) return STS_ERR_ALLOC;
         //allocate initialization buffer
         pInitBuf = ippsMalloc_8u(initSize);
         //init ipp resizer
         ippSts = ippiResizeLinearInit_8u(srcSize, dstSize, pSpec);
         ippSts = ippiResizeGetSrcRoi_8u(pSpec, dstRoiOffset, dstRoiSize, &srcRoiOffset, &srcRoiSize);
         // adjust input and output buffers to current ROI
         unsigned char *pSrcPtr = pSrc + srcRoiOffset.y*srcStep + srcRoiOffset.x*CHANNELS;
         unsigned char *pDstPtr = pDst + dstRoiOffset.y*dstStep + dstRoiOffset.x*CHANNELS;
         ippSts = ippiResizeGetBufferSize_8u(pSpec, dstRoiSize, CHANNELS, &bufferSize);
         pBuffer = ippsMalloc_8u(bufferSize);
         // perform resize
         ippSts = ippiResizeLinear_8u_C1R(pSrcPtr, srcStep, pDstPtr, dstStep, dstRoiOffset, dstRoiSize, ippBorderRepl, 0, pSpec, pBuffer);
         .......
    }

    2. Image Resize with Intel® IPP Integration Wrappers (C++ interface)

    { ......
          //Initialization
          IppDataType dataType = ImageFormatToIpp(src.m_sampleFormat);
          ipp::IwiSize srcSize(ImageSizeToIpp(src.m_size));
          ipp::IwiSize dstSize(ImageSizeToIpp(dst.m_size));
          m_resize.InitAlloc(srcSize, dstSize, dataType, src.m_samples, interpolation, ipp::IwiResizeParams(), ippBorderRepl);
          //Run
          ipp::IwiImage iwSrc = ImageToIwImage(src);
          ipp::IwiImage iwDst = ImageToIwImage(dst);
          ipp::IwiRect rect((int)roi.x, (int)roi.y, (int)roi.width, (int)roi.height);
          ipp::IwiRoi  iwRoi(rect);
          m_resize(&iwSrc, &iwDst, &iwRoi);
      ......
    }

    1.2 Threading

    The API of Integration Wrappers (IW) is designed to simplify tile-based processing of images. Tiling is based on the concept of region of interest (ROI).
    Most IW image processing functions operate not only on whole images but also on image areas - ROIs. Image ROI is a rectangular area that is either some part of the image or the whole image.

    The sections below explain the following IW tiling techniques:

    Manual tiling

    IW functions are designed to be tiled using the IwiRoi interface. But if for some reason automatic tiling with IwiRoi is not suitable, there are special APIs to perform tiling manually.

    When using manual tiling you need to:

    • Shift images to a correct position for a tile using iwiImage_GetRoiImage
    • If necessary, pass correct border InMem flags to a function using iwiRoi_GetTileBorder
    • If necessary, check the filter border around the image border using iwiRoi_CorrectBorderOverlap

    Here is an example of IW threading with OpenMP* using manual tiling:

    #include "iw++/iw.hpp"
    #include <omp.h>
    
    int main(int, char**)
    {
        // Create images
        ipp::IwiImage srcImage, cvtImage, dstImage;
        srcImage.Alloc(ipp::IwiSize(320, 240), ipp8u, 3);
        cvtImage.Alloc(srcImage.m_size, ipp8u, 1);
        dstImage.Alloc(srcImage.m_size, ipp16s, 1);
        int threads = omp_get_max_threads(); // Get threads number
        ipp::IwiSize   tileSize(dstImage.m_size.width, (dstImage.m_size.height + threads - 1)/threads); // One tile per thread
        IppiBorderSize sobBorderSize = iwiSizeToBorderSize(iwiMaskToSize(ippMskSize3x3)); // Convert mask size to border size
        #pragma omp parallel num_threads(threads)
        {
            // Declare thread-scope variables
            IppiBorderType border;
            ipp::IwiImage srcTile, cvtTile, dstTile;
            // Color convert threading
            #pragma omp for
            for(IppSizeL row = 0; row < dstImage.m_size.height; row += tileSize.height)
            {
                ipp::IwiRect tile(0, row, tileSize.width, tileSize.height); // Create actual tile rectangle
                // Get images for current ROI
                srcTile = srcImage.GetRoiImage(tile);
                cvtTile = cvtImage.GetRoiImage(tile);
                // Run functions
                ipp::iwiColorConvert_RGB(&srcTile, iwiColorRGB, &cvtTile, iwiColorGray);
            }
            // Sobel threading
            #pragma omp for
            for(IppSizeL row = 0; row < dstImage.m_size.height; row += tileSize.height)
            {
                ipp::IwiRect tile(0, row, tileSize.width, tileSize.height); // Create actual tile rectangle
                iwiRoi_CorrectBorderOverlap(sobBorderSize, cvtImage.m_size, &tile); // Check border overlap and correct the tile if necessary
                border = iwiRoi_GetTileBorder(ippBorderRepl, sobBorderSize, cvtImage.m_size, tile); // Get actual tile border
                // Get images for current ROI
                cvtTile = cvtImage.GetRoiImage(tile);
                dstTile = dstImage.GetRoiImage(tile);
                // Run functions
                ipp::iwiFilterSobel(&cvtTile, &dstTile, iwiDerivHorFirst, ippMskSize3x3, border);
            }
        }
    }

    Basic tiling

    You can use basic tiling to tile or thread one standalone function or a group of functions without borders. To apply basic tiling, initialize the IwiRoi structure with the current tile rectangle and pass it to the processing function.

    For functions operating with different sizes for source and destination images, use the destination size as a base for tile parameters.

    Here is an example of IW threading with OpenMP* using basic tiling with IwiRoi:

    #include "iw++/iw.hpp"
    #include <omp.h>
    
    int main(int, char**)
    {
        // Create images
        ipp::IwiImage srcImage, cvtImage, dstImage;
        srcImage.Alloc(ipp::IwiSize(320, 240), ipp8u, 3);
        cvtImage.Alloc(srcImage.m_size, ipp8u, 1);
        dstImage.Alloc(srcImage.m_size, ipp16s, 1);
    
        int            threads = omp_get_max_threads(); // Get threads number
        ipp::IwiSize   tileSize(dstImage.m_size.width, (dstImage.m_size.height + threads - 1)/threads); // One tile per thread
    
        #pragma omp parallel num_threads(threads)
        {
            // Declare thread-scope variables
            ipp::IwiRoi  roi;
    
            // Color convert threading
            #pragma omp for
            for(IppSizeL row = 0; row < dstImage.m_size.height; row += tileSize.height)
            {
                roi = ipp::IwiRect(0, row, tileSize.width, tileSize.height); // Initialize IwiRoi with current tile rectangle
    
                // Run functions
                ipp::iwiColorConvert_RGB(&srcImage, iwiColorRGB, &cvtImage, iwiColorGray, IPP_MAXABS_64F, &roi);
            }
    
            // Sobel threading
            #pragma omp for
            for(IppSizeL row = 0; row < dstImage.m_size.height; row += tileSize.height)
            {
                roi = ipp::IwiRect(0, row, tileSize.width, tileSize.height); // Initialize IwiRoi with current tile rectangle
    
                // Run functions
                ipp::iwiFilterSobel(&cvtImage, &dstImage, iwiDerivHorFirst, ippMskSize3x3, ippBorderRepl, 0, &roi);
            }
        }
    }

    Pipeline tiling

With the IwiRoi interface you can easily tile pipelines by applying the current tile to an entire pipeline at once instead of tiling each function one by one. This operation requires border handling and tracking of pipeline dependencies, which increases the complexity of the API. When used properly, however, pipeline tiling can increase the scalability of threading or the performance of non-threaded functions by performing all operations inside the CPU cache.

    Here are some important details that you should take into account when performing pipeline tiling:

    1. Pipeline tiling is performed in reverse order: from destination to source, therefore:
      • Use the tile size based on the destination image size
      • Initialize the IwiRoi structure with the IwiRoiPipeline_Init for the last operation
      • Initialize the IwiRoi structure for other operations from the last to the first with IwiRoiPipeline_InitChild
    2. Obtain the border size for each operation from its mask size, kernel size, or using the specific function returning the border size, if any.
    3. If you have a geometric transform inside the pipeline, fill in the IwiRoiScale structure for IwiRoi for this transform operation.
4. In case of threading, copy the initialized IwiRoi structures to thread-local storage or initialize them on a per-thread basis; access to the structures is not thread-safe.
5. Do not exceed the maximum tile size specified during initialization. Otherwise, buffer overflow can occur.

The IW package contains several advanced tiling examples, which can help you understand the details of the process. For more information on how to find and use these examples, please download the package and view the included developer reference for Integration Wrappers for Intel IPP.

    The following example demonstrates IW threading with OpenMP* using IwiRoi pipeline tiling:

    #include "iw++/iw.hpp"
    #include <omp.h>
    
    int main(int, char**)
    {
        // Create images
        ipp::IwiImage srcImage, dstImage;
        srcImage.Alloc(ipp::IwiSize(320, 240), ipp8u, 3);
        dstImage.Alloc(srcImage.m_size, ipp16s, 1);
        int threads = omp_get_max_threads(); // Get threads number
        ipp::IwiSize   tileSize(dstImage.m_size.width, (dstImage.m_size.height + threads - 1)/threads); // One tile per thread
        IppiBorderSize sobBorderSize = iwiSizeToBorderSize(iwiMaskToSize(ippMskSize3x3)); // Convert mask size to border size
    
        #pragma omp parallel num_threads(threads)
        {
            // Declare thread-scope variables
            ipp::IwiImage       cvtImage;
            ipp::IwiRoiPipeline roiConvert, roiSobel;
            roiSobel.Init(tileSize, dstImage.m_size, &sobBorderSize); // Initialize last operation ROI first
            roiConvert.InitChild(&roiSobel); // Initialize next operation as a dependent
            // Allocate intermediate buffer
            cvtImage.Alloc(roiConvert.GetDstBufferSize(), ipp8u, 1);
            // Joined pipeline threading
            #pragma omp for
            for(IppSizeL row = 0; row < dstImage.m_size.height; row += tileSize.height)
            {
                roiSobel.SetTile(ipp::IwiRect(0, row, tileSize.width, tileSize.height)); // Set IwiRoi chain to current tile coordinates
                // Run functions
                ipp::iwiColorConvert_RGB(&srcImage, iwiColorRGB, &cvtImage, iwiColorGray, IPP_MAXABS_64F, &roiConvert);
                ipp::iwiFilterSobel(&cvtImage, &dstImage, iwiDerivHorFirst, ippMskSize3x3, ippBorderRepl, 0, &roiSobel);
            }
        }
    }

    2. Getting Started

    2.1 Getting Started Document

Getting Started instructions are provided in the Integration Wrappers Developer Guide and Reference. You can find this document in the following folder, available after you install the Intel® IPP Integration Wrappers Preview package: /interfaces/integration_wrappers.

The document referenced above covers the following getting started information:

    • Building Integration Wrappers and Examples
    • Using Integration Wrappers Examples
    • C/C++ API Reference for the Integration Wrappers

2.2 Example Code

The Intel® IPP Integration Wrappers Preview package contains code samples demonstrating how to use these APIs. The example files are located at interfaces/integration_wrappers/examples/iw_resize.
    These examples demonstrate some of the IW features and help you get started with the IW library.

    3. Support

If you have any problems with Intel® IPP Integration Wrappers Preview, post your questions at the Intel® IPP forum. If you have already registered your Intel® software product at the Intel® Software Development Products Registration Center, you can also submit your questions through Intel® Premier Support.

    4. Download and Installation

1. Intel® IPP Integration Wrappers is an add-on library for the Intel IPP package, so install the Intel IPP main package before using Intel IPP Integration Wrappers. Intel IPP is available as part of the Intel® Parallel Studio XE and Intel® System Studio products, and as a standalone package with the Community License. Any of these products can work with Intel IPP Integration Wrappers.
    2. Use the links below to download the Intel IPP IW package for Linux*, Windows*, or OS X*. To install the libraries, put the archive file into the IPPROOT/interfaces folder and extract all files from the archive. By downloading any sample package, you accept the End User License Agreement.
    3. Check the “Getting Started” document in the package to learn how to use the library.
      Windows*: w_ippiw_p_2017.1.010.zip
      Linux*: l_ippiw_p_2017.1.009.tgz
      OS X*: m_ippw_p_2017.1.009.tgz


    Rewiring Shakespeare with Elsinore



    Moonlighting game developers shake up the adventure genre to win 2016 Intel® Level Up Contest award

    Making a video game on the side when you already have a full-time job is a major undertaking. Presuming to rewrite Shakespeare is quite another thing entirely. Combining the two, however, takes game development chutzpah to a whole new level.

    To paraphrase the character Polonius in Shakespeare’s Hamlet: “Neither an employee nor an independent be,” to which we’d add, “not when you can be both.” This is the approach of the team at Golden Glitch Studios. Led by Katie Chironis, who in her day job is a VR game designer at Oculus, Golden Glitch is making Elsinore, a bold evolution in the point-and-click genre that won Best Adventure/Role Playing game in the 2016 Intel® Level Up Contest.

Figure 1: The Elsinore title screen showing the playable character Ophelia, and the hand-painted art style.

    Elsinore is a real labor of love. Nearly every member of its 11-strong team holds a full-time job in the games industry and works on the project in their free time. This “moonlighting model” of game development is made possible by the lack of geographical barriers to online collaboration and a team that is motivated by more than simple financial reward.

    That’s not to say that the team’s chosen modus operandi doesn’t come with its own particular challenges. How Katie, her game designer right-hand Connor Fallon, and the rest of the talented team are making it happen, is, like Hamlet itself, a story worth telling.

    Questioning Shakespeare

    Katie’s initial encounter with Shakespeare’s best-known opus wasn’t promising: “Like most other kids, I was forced to read Hamlet. I definitely didn’t appreciate it at the time.” A college course brought the play to her attention again, this time in a more flattering light, and Katie realized there were some questions she needed answering.

    “The thing that stuck out to me was that Ophelia didn’t get what she deserved,” said Katie. “Hamlet can withstand all this crazy stuff, and manages not to lose his mind, but Ophelia, the second her dad dies and Hamlet rejects her, goes completely insane and drowns herself. That to me felt really unbalanced.”

    The final impetus to turn Ophelia’s story into a game came from a Shakespeare-themed contest held by Carnegie Mellon University’s Game Creation Society, of which both Katie and Connor were members. Once they started down the path with Elsinore, they realized they were onto something big, and, after a couple of summers developing the idea, followed by graduation, they decided to knuckle down and make it a reality.

    Tragic Transformation

    As a reformed scholar of Shakespeare, Katie had few qualms about rewriting the Hamlet narrative to bring the balance she sought, safe in the knowledge that Shakespeare himself had indulged in something of a fan-fiction rewrite, having drawn on the Norse legend of Amleth.

    In this spirit, Katie took a number of “liberties” which included making Ophelia a woman of color of Moorish descent, adding more women to the ensemble, and pouring something of herself into Ophelia’s character. “Ophelia shares a lot of flaws that I personally have,” said Katie. “I write a lot of myself in her. That’s been my personal highlight on the project.”

Figure 2: Screenshot showing the options Ophelia has for sharing information with Hamlet, each of which bears its own consequences.

The re-imagining provided an opportunity to play around with the point-and-click adventure genre, of which the team are ardent fans, bringing new ideas and mechanics into play. The most notable of these are the time-looping story, in which the player, as Ophelia, repeatedly relives the same 48-hour period, and the responsive narrative system that reacts to the information you share with NPCs, none of which they forget, despite the resetting clock.

    “We set out to create a game that makes you feel like you are living through hell at the end of the world, but you get to know the people who are along for the ride with you,” said Katie. “There is something very compelling about repeatedly facing down the same tragedy,” added Connor.

    Moonlighting Strangers

    “It is really, really hard to make a game in your spare time when you work in the games industry full time,” admitted Katie. “I didn’t fully appreciate that when I started.” Katie, Connor, and the rest of their original collaborators, were committed however, so they began the process of pulling together a complete team of like-minded compatriots.

The initial team comprised members of the Game Creation Society they had been part of at Carnegie Mellon. After graduating from the Pennsylvania college, the team was scattered to the four winds in search of work. “Some of us are in academia, and some of us are in the games industry,” said Connor. “Oculus, ArenaNet, Double Fine, Telltale...we run the spectrum.”

Figure 3: Still showcasing the artist Wesley Martin’s hand-painted work and a character’s bloody fate.

    Most, however, eventually found themselves heading in the same direction: the West Coast’s game development hubs. With team members based in Seattle, San Francisco, and Los Angeles, as well as a couple of stragglers still resisting the pull of the West Coast, it remains a challenge to get people under the same roof.

    To help keep things on track, they use a number of online tools. “We love Trello*,” said Katie. “We rely heavily on Trello and Google* Docs to make sure that we are synced up on workloads and current tasks. [Engineers] Eric and Kristen have also hand-developed a number of tools in Unity* that we use to track information about the game,” she added. “Those have been incredibly helpful.”

    One interesting addition to the team is composer Adam Gubman, who creates music for high-profile clients including Disney and NBC, and game publishers Square-Enix, Activision, and Ubisoft. Katie credits him with more than the music on Elsinore: “He was my piano teacher when I was a kid,” she explained. “Back then he was in college to become a game composer. He was one of the first people in my life that made me realize that I could have a career in games. The fact that he decided to join the project is something I'll always be grateful for.”

    Fringe Benefits

    In 2015, the Golden Glitch team made the decision to run a Kickstarter* campaign to help fund the nascent Elsinore. Despite the campaign’s intensity and its baptism of fire in community and social media marketing, it proved worthwhile. “I’m really glad we did it,” said Katie. “It was stressful at the time, but we didn’t want to seek out a publisher because it’s not our full-time job, so there was no other avenue of funding open to us.”

Not only did they hit the initial target in three days and smash through numerous stretch goals to reach more than $32,000, they also found themselves Greenlit on Steam* in under 48 hours. Katie fondly remembers receiving an email from a Valve* staffer along the lines of, “Hey, in my former life I was an English teacher and I taught Hamlet to my students, I love your game. It’s been passed through.”

Figure 4: Storyboard art showing Hamlet and the ghost.

    One side-effect of any successful Kickstarter campaign is a committed community of fans following and supporting the project, but what Golden Glitch didn’t anticipate was tapping into the huge community of Shakespeare fans on Tumblr*. In addition, many members of the current team contacted Golden Glitch as a direct result of the Kickstarter, offering their professional expertise to make the game happen.

    “After the Kickstarter, we took on a wave of people: our composer, Adam; our sound designer, Steve; our 3D-animator, Becca; and our cinematic artist, Tati,” said Connor. And while the original core team continued to work for free, the Kickstarter funding meant they could pay their new hires. “I think a Kickstarter is something that every indie game should consider,” added Katie.

    Rough and Smooth

    One obvious handicap with the moonlighting model is the limited time the team can dedicate to the game. “Elsinore moves more slowly than other indie games,” affirmed Katie. “We can’t iterate as quickly, or devote time to creating. It’s really frustrating not being able to move and react as quickly as a full-time game.”

    “Real life gets in the way a lot of the time. I have so little time to spend on Elsinore already, that when other things blow up, the game has to be put on the back burner,” continued Katie. “That’s just how it has to be, because it’s not making us any money right now.”

    The part-time approach means that efficiency is vital to the development process, which Katie sees as a definite plus: “The upside is that we really carefully consider every single decision that goes into the game, whereas, if you’re working under a tight deadline at work, you might be rushing decisions because you just have to get it in.” The hope is that the careful deliberation about how to best use the limited resources available will result in a better game. Based on Elsinore’s reception to date, that theory appears to be accurate.

Figure 5: Side-by-side screenshots showing the visual evolution of Elsinore’s dungeons.

Ultimately, the combination of Kickstarter funding, the lack of daily overheads, and moonlighting colleagues working for love makes for a creative freedom without which Elsinore may not have happened. “We don’t have a publisher, we don’t need anyone’s money, and we can work to a schedule that makes us happy,” affirmed Katie.

    But while Katie, Connor, and the team naturally want to see Elsinore succeed, some of their immediate goals are more humble and symptomatic of the remote-working model. “We have not yet had the chance to all meet up in the same place, at the same time,” admitted Katie. “It’s on our bucket list.”

    Levelling Up

    In early 2016, the team decided to enter Elsinore in the Adventure/Role Playing category of the Intel Level Up Contest. “We’ve had our eye on the contest for a couple of years,” said Katie. “I think it’s a little more democratic than a lot of indie game festivals and submission groups. I was immediately drawn to it.”

    Looking at the game from the point-of-view of the first-time players who would judge the entries was a valuable exercise for the team. “Submitting your project to a contest forces you to consider all the little things that affect how people see the game,” said Connor. “It encourages you to zero-in on what areas need touching up, that you might have been putting off.”

    From that perspective, the impetus gained from entering would have been reward in itself. The contest judges, however, had other ideas. The panel, comprised of games industry luminaries including writers Chris Avellone and Anne Toole, Vlambeer’s Rami Ismail, and Double Fine’s Tim Schafer, saw fit to bestow the Best Adventure/Role Playing Game award on Elsinore. “We thought there’s no way in hell that we’ll win this thing, but we should submit anyway just to see,” said Katie. “We were very, very happy when we won.”

Figure 6: The Intel Level Up Contest award for Elsinore.

    The benefits of winning went far beyond the cash prize. Following the win, Intel invited the team to demo Elsinore on the show-floor at PAX West, giving them an opportunity to put the game into the hands of hundreds of players and gather valuable feedback. “The exposure that we have gotten as a result has been incredible, and something that would never have come to us otherwise,” said Katie.

    Just as important was the morale and motivational boost to the team. “This is our side project, so we don’t get a lot of the normal feedback, so the fact that we were selected was this enormous motivating push for us,” said Katie. “Motivation is basically your currency when it’s a side project, so that’s huge.”

    That’s not to say the extra cash isn’t being put to good use. “It actually helped us bring team members up for PAX to demo at the Intel booths,” said Katie. “It’s also going to be really useful in getting our voice acting as top notch as it can be.”

Figure 7: Katie Chironis (left), with a visitor playing Elsinore, on the Intel stand at PAX West 2016.

    The benefits of entering, and ultimately winning, the contest have turned the team into committed Intel Level Up Contest advocates. “When you work on something for so long, it’s really nice to be validated,” said Connor. “The whole experience has given us another boost of momentum.” Katie added her endorsement: “I would absolutely encourage people to enter.”

    The Home Straight

    It’s an easy assumption to make that the members of the Elsinore team harbor dreams of quitting their day jobs, and turning Golden Glitch Studios into a full-time gig. However, that’s not the goal at all. “Honestly, I really like my day job,” said Katie. “I don’t think any of us have any plans to leave our jobs.”

    In fact, they believe that working on a personal game project on the side can bring important benefits, and not only to the individual. “I personally like the setup of having a day job, and working on artsy games like Elsinore on the side,” said Connor. “It only helps our professional lives; things you learn on one project benefit the other, and it gives you an opportunity to stretch yourself in different ways.”

    For this reason, Katie believes gaming companies should actively encourage their staff to develop side projects. “It has allowed us to gain skills that we never would have otherwise, and bring them back to the job for free,” she said. “I now have skills in marketing and PR, releasing a game, and showing it publicly that cost my employer nothing. It’s basically self-motivated training.”

    And should the game find its audience, they expect more extra-curricular opportunities to arise. “I’d say the biggest benefit that would come with success would be having more clout and reputation to assemble and promote future side projects,” said Connor, with the clear intention of making the “moonlighting model” an ongoing feature of his working life.

Figure 8: The courtyard of Elsinore showing the game’s main interface, top left.

    For now, however, Golden Glitch Studios has a game to finish. Thanks to the visibility Elsinore has gained through its Kickstarter, from the Intel Level Up Contest, and exposure at GDC and PAX shows, the game has a growing fan base. “So many people have reached out to tell us they’re connecting with it,” Katie said. “It’s become something way larger than any of us thought it would. It’s kind of wild, to tell you the truth.”

    Elsinore by Golden Glitch Studios is coming to PC via Steam in 2017.

    For more information about Elsinore and Golden Glitch Studios, visit https://elsinore-game.com/

    For details on the Intel® Level Up Game Developer Contest and this year’s winners, go to: https://software.intel.com/sites/campaigns/levelup2016/

    Elsinore is an IndieCade* Festival 2016 finalist. Read more about the festival here: http://www.indiecade.com/2016/games.

    Follow Golden Glitch Studios on Twitter: https://twitter.com/goldenglitch.

    Analyzing Intel® MPI applications using Intel® Advisor


Many of today’s HPC applications use Intel® MPI to implement their parallelism. However, using Intel’s analyzer tools in a multi-process environment can be tricky. Intel® Advisor can be very helpful for maximizing your vectorization, memory, and threading performance. To analyze Intel MPI applications with Intel Advisor, follow the steps below to get the best value out of your results.

     

Contents:

    • Remote analysis flow
    • Collecting results using -no-auto-finalize
    • Generating an MPI command-line using the Intel Advisor GUI
    • Generating the command line for Survey or Trip Counts analyses
    • Generating the command line for memory access pattern analysis
    • Generating the command line for dependencies analysis
    • Viewing the collected results in the GUI
    • Viewing the collected results without the GUI
    • Conclusion

     

    Remote analysis flow

1. Collect data using the command line on the target
      1. mpirun -n 1 -gtool "advixe-cl -collect survey -no-auto-finalize -project-dir /user/test/vec_project:0" /user/test/vec_samples/vec_samples
    2. Pack (optional) and retrieve the results
      1. advixe-cl --snapshot --project-dir /user/test/vec_project --pack --cache-sources --cache-binaries -- /tmp/my_proj_snapshot
      2. Copy vec_project or my_proj_snapshot to the host
    3. View the results on the host in the Advisor GUI
      1. Open the project
        1. advixe-gui vec_project
      2. Open Project Properties
        1. Set up search paths in Project Properties
      3. Open the results
     

Collecting results using -no-auto-finalize

    On some platforms, such as the Intel® Xeon Phi™ processor, result finalization may take a long time. In such cases you can specify the -no-auto-finalize option so that finalization does not happen on your target; the results are instead finalized when you open them in the GUI on your host.

    MPI command line examples

    Collect survey

mpirun -n 1 -gtool "advixe-cl -collect survey -no-auto-finalize -project-dir /user/test/vec_project:0" /user/test/vec_samples/vec_samples

To run with a non-Intel® MPI implementation, add the -trace-mpi option as follows:

mpirun -n 1 -gtool "advixe-cl -collect survey -trace-mpi -no-auto-finalize -project-dir /user/test/vec_project:0" /user/test/vec_samples/vec_samples

    Collect tripcounts

mpirun -n 1 -gtool "advixe-cl -collect tripcounts -no-auto-finalize -project-dir /user/test/vec_project:0" /user/test/vec_samples/vec_samples

    Collect dependencies

Note: You need to get the list of loops to analyze from either the report command or the Intel Advisor GUI.

mpirun -n 1 -gtool "advixe-cl -collect dependencies -mark-up-list=6,7,8,9,10 -no-auto-finalize -project-dir /user/test/vec_project:0" /user/test/vec_samples/vec_samples

    Collect map

Note: You need to get the list of loops to analyze from either the report command or the Intel Advisor GUI.

mpirun -n 1 -gtool "advixe-cl -collect map -no-auto-finalize -mark-up-list=6,7,8,9,10 -project-dir /user/test/vec_project:0" /user/test/vec_samples/vec_samples

    Generating an MPI command-line using the Intel Advisor GUI

    For MPI applications you need to collect your Intel Advisor results using the command-line. We make this process easy by having the Intel Advisor GUI give you the precise command-lines you need to run. There are two ways to get the command-line; the first is using the project properties.

    You have the option of choosing Intel MPI or another version of MPI. You can also specify the number of ranks you would like to run.

    Generating the command line for Survey or Trip Counts analyses

You can also get the command line by clicking on the command-line button, right next to the collect button. Once you have generated the command line, cut and paste it into a terminal window and run it. It is sometimes helpful to specify the -no-auto-finalize option; if it is specified, the results are finalized when you open them in the GUI.

Here is a Survey command, matching the example shown earlier:

    mpirun -n 1 -gtool "advixe-cl -collect survey -no-auto-finalize -project-dir /user/test/vec_project:0" /user/test/vec_samples/vec_samples

    Here is a Trip Counts command, which differs only in the collection type:

    mpirun -n 1 -gtool "advixe-cl -collect tripcounts -no-auto-finalize -project-dir /user/test/vec_project:0" /user/test/vec_samples/vec_samples

Notice the -gtool option used above. This is an Intel MPI option; it allows our analyzers to analyze only a selected group of ranks. In this case we are analyzing only rank 0. If you did not use -gtool and specified an MPI application with 10 ranks, ten invocations of Intel Advisor would be launched.

    Generating the command line for memory access pattern analysis

    To analyze the memory patterns in your application, you can select the loops in the survey view.

    Then click on the command-line button.

    Generating the command line for dependencies analysis

    To check the dependencies of your loop, you again would need to select the loops you would like to analyze and then select the command-line button.

    Viewing the collected results in the GUI

Once you have collected your results, you will need to view them. The best way to do this is using the Intel Advisor GUI. If you specified the -no-auto-finalize option, it is important to open your project and then use Project Properties to set the paths to your binaries and sources. You need to do this before you open the results so that they can be finalized properly.

    Then click on the "Survey" tab

     

    Viewing the collected results without the GUI

     

    You also have the option to view your Intel Advisor results without using the GUI. You can either generate a text report or a CSV report.

    Text mode:

advixe-cl -report summary -project-dir ./advi -format text -report-output ./out/summary.txt
    advixe-cl -report survey -project-dir ./advi -format text -report-output ./out/survey.txt
    advixe-cl -report map -project-dir ./advi -format text -report-output ./out/map.txt
    advixe-cl -report dependencies -project-dir ./advi -format text -report-output ./out/dependencies.txt

    CSV mode:

advixe-cl -report summary -project-dir ./advi -format csv -csv-delimiter tab -report-output summary.csv
    advixe-cl -report survey -project-dir ./advi -format csv -csv-delimiter tab -report-output survey.csv
    advixe-cl -report map -project-dir ./advi -format csv -csv-delimiter tab -report-output map.csv
    advixe-cl -report dependencies -project-dir ./advi -format csv -csv-delimiter tab -report-output dependencies.csv

    Conclusion

Intel Advisor is a must-have tool for getting the most performance out of your MPI programs. To obtain Advisor results for an MPI application:

    • Collect using the CLI. You can generate the command line from the Advisor GUI. If finalization is too slow, use the -no-auto-finalize option.
    • If you collect and view results on different machines, copy the result directory. If the results were finalized, you can pack them into an archive to avoid additional configuration.
    • Open the result in the GUI. If the results were collected without finalization, configure the search paths prior to opening the result.

     

     

    What's new? Intel® SDK for OpenCL™ Applications 2016, R3

    • Support for 7th Generation Intel® Core™ Processors on Microsoft Windows* and Linux* operating systems
    • Windows 10 Anniversary Update support
    • Yocto Project* support
      • These processors are supported as target systems when running the Apollo Lake Yocto BSP (other OSes are not supported for these processors): 7th Generation Intel® Pentium® Processor J4000/N4000 and 7th Generation Intel® Celeron® Processor J3000/N3000 Series for Desktop
      • Offline compiler support with GPU assembly code generation
      • Debug OpenCL™ kernels using the Yocto* GPU driver on host targets (6th and 7th Generation Intel® Core Processor)
    • OpenCL™ 2.1 and SPIR-V* support on Linux* OS
      • OpenCL 2.1 development environment with the experimental CPU-only runtime for OpenCL 2.1
      • SPIR-V generation support with Intel® Code Builder for OpenCL™ offline compiler and Kernel Development Framework including textual representation of SPIR-V binaries
    • New analysis features in Kernel Development Framework for Linux* OS
      • HW counters support
      • Latency analysis on 6th and 7th Generation Intel® Core™ Processors

    Sensor to Cloud: Connecting Intel® NUC to Amazon Web Services (AWS)*


    Introduction

    This paper will show you how to use an Intel® NUC to connect sensors on an Arduino 101* (branded Genuino 101* outside the U.S.) to the Amazon Web Services (AWS)* IoT service. You’ll see how to read real-time sensor data from the Arduino 101, view it locally on the Intel® NUC, and send it to AWS IoT where the data can be stored, visualized and processed in the cloud. We’ll use Node-RED* on the Intel® NUC to create processing flows to perform input, processing and output functions that drive our application.

    Setup and Prerequisites:

    • Intel® NUC connected to the Internet and applicable software package updates applied
    • Arduino 101 connected to Intel® NUC through USB
    • Grove* Base Shield attached to Arduino 101 and switched to 3V3 VCC
    • Grove sensors connected to base shield: Light on A1, Rotary Encoder on A2, Button on D4, Green LED on D5, Buzzer on D6, Relay on D7
    • An active AWS cloud account and familiarity with the AWS IoT service

    Read Sensors and Display Data on Intel® IoT Gateway Developer Hub

    Log into the Intel® NUC’s IoT Gateway Developer Hub by web browsing to the Intel® NUC’s IP address and using gwuser as the default username and password. You’ll see basic information about the Intel® NUC including model number, version, Ethernet address, and network connectivity status.

    Click the Sensors icon and then click the Program Sensors button. This will open the Node-RED canvas where you’ll see Sheet 1 with a default flow for a RH-USB sensor. We won’t be using the RH-USB sensor so you can use your mouse to drag a box around the entire flow and delete it by pressing the Delete key on your keyboard. You’ll be left with a blank canvas.

    Along the left side of the Node-RED screen you’ll see a series of nodes. These are the building blocks for creating a Node-RED application on the Intel® NUC. We’ll use several nodes in this application:

• Read button presses
    • Measure the light level
    • Measure the rotary position
    • Control the relay and buzzer
    • Set the LED indicator on/off
    • Format the chart display on the Intel® NUC
    • Send data to the Intel® NUC's MQTT chart listener and to AWS IoT

    Drag and drop nodes onto the canvas and arrange them as shown below. For some of the nodes we'll need multiple copies. Use your mouse to connect wires between the nodes as shown. We'll make the connection to AWS IoT later so only one MQTT node is needed right now.

    When nodes are first placed on the canvas they are in a default state and need to be configured before they'll work. Nodes are configured by double-clicking them and setting parameters in their configuration panels.

Double-click each node on the canvas and set its parameters as shown in the list below. In some cases the Name field is left blank to use the default name of the node. Pin numbers correspond to the Grove Base Shield jack where the sensor or actuator is connected.

• Grove Button - Platform: Firmata, Pin: D4, Interval (ms): 1000
    • Grove Light - Platform: Firmata, Pin: A1, Unit: Raw Value, Interval (ms): 1000
    • Grove Rotary - Platform: Firmata, Pin: A2, Unit: Absolute Raw, Interval (ms): 1000
    • Grove LED - Platform: Firmata, Pin: D5, Mode: Output
    • Grove Relay (upper) - Platform: Firmata, Pin: D7
    • Grove Relay (lower) - Name: Grove Buzzer, Platform: Firmata, Pin: D6 (we'll use a relay node to control the buzzer instead of the native Grove Buzzer node)
    • chart tag connected to Grove Button - Title: Button, Type: StatusText
    • chart tag connected to Grove Light - Title: Light, Type: Gauge, Units: RAW
    • chart tag connected to Grove Rotary - Title: Rotary, Type: Gauge, Units: RAW
    • mqtt - Server: localhost:1883, Topic: /sensors, Name: Charts

    Verify your settings and wiring connections, then click the Deploy button to deploy your changes and make them active on the Intel® NUC. After deploying the flow, you should see a data display towards the top of the Intel® IoT Gateway Developer Hub screen with live values for Rotary, Light and Button. Turning the rotary knob and covering the light sensor should make the numbers change up and down, and pressing the button should turn on the LED, sound the buzzer, and energize the relay.

    Configure AWS* IoT and Node-RED*

    1. Log into your AWS account and navigate to the AWS IoT console.
2. Create a new device (thing) named Intel_NUC and a Policy named PubSubToAnyTopic that allows publishing and subscribing to any MQTT topic (a sketch of such a policy appears after this list).
    3. Create and activate a new Certificate and download the private key file, certificate file, and root CA file (available here) to your computer.
    4. Attach the Intel_NUC device and PubSubToAnyTopic policy to the new certificate.
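
    For reference, here is a minimal sketch of what such a policy document might look like. The actions listed are an assumption for illustration; in production you should scope the actions and resources to your own security requirements rather than allowing everything:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["iot:Connect", "iot:Publish", "iot:Subscribe", "iot:Receive"],
          "Resource": "*"
        }
      ]
    }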

    While logged into the Intel® NUC via ssh as gwuser, create the directory /home/gwuser/awsiot and then use SFTP or SCP to copy the downloaded private key file, certificate file and root CA files from your workstation to the /home/gwuser/awsiot directory on the Intel® NUC.

    Connect Intel® NUC to AWS* IoT

    1. Drag a mqtt output node onto the Node-RED canvas and then double-click it.
    2. In the Server pick list select Add new mqtt-broker… and then click the pencil icon to the right.
    3. In the Connection tab, set the Server field to your AWS IoT endpoint address which will look something like aaabbbcccddd.iot.us-east-1.amazonaws.com. You can find the endpoint address by using the AWS CLI command aws iot describe-endpoint on your workstation.
4. Set the Port to 8883 and checkmark Enable secure (SSL/TLS) connection, then click the pencil icon to the right of Add new tls-config… In the Certificate field enter the full path and filename to your certificate file, private key file, and root CA file that you copied earlier into the /home/gwuser/awsiot directory. For example, the Certificate path might look like /home/gwuser/awsiot/1112223333-certificate.pem.crt and the Private Key path might look like /home/gwuser/awsiot/1112223333-private.pem.key. The CA Certificate might look like /home/gwuser/awsiot/VeriSign-Class-3-Public-Primary-Certification-Authority-G5.pem.
    5. Checkmark Verify server certificate and leave Name empty.
    6. Click the Add button and then click the second Add button to return to the main MQTT out node panel.
    7. Set the Topic to nuc/arduino101, set QoS to 1, and set Retain to false.
    8. Set the Name field to AWS IoT and then click Done.

    Send Data to AWS* IoT

    Drag a function node onto the Node-RED canvas. Double-click to edit the node and set the Name to Format JSON. Edit the function code so it looks like this:

msg.payload = {
        source: "arduino101",         // identify the sending device
        rotary: Number(msg.payload),  // convert the incoming rotary reading to a number
        timestamp: Date.now()         // record when the reading was taken
    };
    return msg;

    Click Done to save the function changes. Draw a wire from the output of the Grove Rotary node to the input of Format JSON, and another wire from the output of Format JSON to the input of AWS IoT. These changes will convert the rotary angle measurements from the Grove Rotary sensor (connected to the Arduino 101) into a JSON object and send it to AWS IoT via MQTT. Click the Deploy button to deploy and activate the changes. The finished flow should look like this:

    Back in the AWS IoT console, start the MQTT Client tool and subscribe to the topic nuc/arduino101. You should see messages arriving once a second containing rotary sensor readings. Rotate the rotary sensor and observe the values changing in near real-time.
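
    Each arriving message is the JSON object built by the Format JSON node; with a hypothetical rotary reading of 512, it would look something like {"source": "arduino101", "rotary": 512, "timestamp": 1480000000000}.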

    When you're done testing this application be sure to stop your Node-RED flow (e.g. by turning off the NUC or removing the wire between Format JSON and AWS IoT and then re-deploying the flow) in order to avoid continuously sending MQTT messages to AWS IoT and consuming AWS IoT processing resources.

    Where to Go From Here

    This application provides the basic foundation for connecting your Arduino 101 and Intel® NUC to AWS IoT. From here you would typically wire up other sensors and send their data to AWS IoT, then build more complex applications that listen to AWS IoT messages and store, process and/or visualize the sensor data.

    Improve Performance of K-Means with Intel® Data Analytics Acceleration Library


    How do banks identify risky loan applications or credit card companies detect fraudulent credit-card transactions? How do universities monitor students’ academic performance? How do you make it easier to analyze digital images? These are just a few situations that many companies face when dealing with huge amounts of data.

To deal with risky loan or credit card problems, we can divide data into clusters of similar characteristics and look for abnormal behaviors when comparing one data point to others in the same cluster. To monitor students’ performance, school faculty can sort students into groups based on their score ranges. Using a similar concept, we can partition a digital image into many segments based on sets of pixels that closely resemble each other. The idea is to simplify the representation of a digital image, making it easier to identify objects and boundaries in the image.

Dividing data into clusters or sorting students’ scores into different groups can be done manually if the amount of data is not large. However, it would be impossible to do manually if the data is in the range of terabytes or petabytes. Therefore, a machine-learning1 approach is a good way to solve these types of problems when dealing with large amounts of data.

This article discusses an unsupervised2 machine-learning algorithm called K-means3 that can be used to solve the above problems. It also describes how the Intel® Data Analytics Acceleration Library (Intel® DAAL)4 helps optimize this algorithm to improve performance when running it on systems equipped with Intel® Xeon® processors.

    What is Unsupervised Machine Learning?

    In the case of supervised learning,5 the algorithm is exposed to a set of data in which the outcome is known so that the algorithm is trained to look for similar patterns in new data sets. In contrast, an unsupervised learning algorithm explores the data set with an unknown outcome. Further, input samples are not labeled and the system has to label them by itself. The system will scan the data and group the ones with similar characteristics/behaviors into what we call clusters. Basically, the system partitions the data set into clusters of similar characteristics and behaviors.

    What is K-means?

    K-means is an unsupervised machine-learning algorithm.

In order to create clusters, K-means first assigns initial values to the centroids, normally by selecting them randomly. These are the centers of the clusters; ideally, they should be as far apart from each other as possible. Next, it takes the objects in the data set and associates each with its nearest center to form the initial clusters. It then calculates new centroids based on the newly formed clusters, and re-associates the objects based on these new centroids. The steps of recalculating centroids and re-associating objects repeat until the centroid locations no longer change or the algorithm completes the specified number of iterations.
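
    To make this loop concrete, here is a minimal NumPy sketch of the procedure just described (random initialization, nearest-centroid assignment, centroid recalculation). It is an illustration only, not the Intel DAAL implementation discussed later, and it ignores edge cases such as a cluster becoming empty:

    import numpy as np

    def kmeans(X, k, n_iterations=100, seed=0):
        """Minimal Lloyd's-algorithm sketch; X is an (n_samples, n_features) array."""
        rng = np.random.default_rng(seed)
        # Pick k distinct objects as the initial centroids (random initialization)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iterations):
            # Associate each object with its nearest centroid (Euclidean distance)
            distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = distances.argmin(axis=1)
            # Recalculate each centroid as the mean of its newly formed cluster
            new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
            if np.allclose(new_centroids, centroids):
                break  # Centroid locations no longer change: stop
            centroids = new_centroids
        return centroids, labels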

The goal of the K-means algorithm is to minimize the cost function J, sometimes called the objective or square-error function.
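
    In standard notation, using the terms defined below, it is:

    $$ J = \sum_{j=1}^{k} \sum_{i=1}^{n_j} \lVert x_i - c_j \rVert^2 $$

    where the inner sum runs over the objects assigned to cluster j.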

Where:

    J = square-error function

    x_i = object i

    c_j = centroid of cluster j

    k = number of clusters

    n_j = number of objects in the jth cluster

    ||x_i - c_j|| = Euclidean distance8 between x_i and c_j

    Figures 1–5 show how the K-means algorithm works:


    Figure 1. Data set layout showing the objects of the data set are all over the space.


Figure 2. Shows the initial positions of the centroids. In general, these initial positions are chosen randomly, preferably as far apart from each other as possible.


Figure 3. Shows new positions of the centroids after one iteration. Note that the two lower centroids are re-adjusted to be closer to the two lower chunks of objects.


    Figure 4. Shows the new positions of the centroids after many iterations. Note that the positions of the centroids don’t vary too much compared to those in Figure 3. Since the positions of the centroids are stabilized, the algorithm will stop running and consider those positions final.


    Figure 5. Shows that the data set has been grouped into three separate clusters.

    Applications of K-means

    K-means can be used for the following:

    • Clustering analysis
    • Image segmentation in medical imaging
    • Object recognition in computer vision

    Advantages and Disadvantages of K-means

    The following lists some of the advantages and disadvantages of K-means.

    • Advantages
      • It’s a simple algorithm.
      • In general it’s a fast algorithm except for the worst-case scenario.
• Works best when data sets are distinct and well separated from each other.
    • Disadvantages
      • Requires an initial value of k.
      • Cannot handle noisy data and outliers.
      • Doesn’t work with non-linear data sets.

    Intel® Data Analytics Acceleration Library (Intel® DAAL)

Intel DAAL is a library consisting of many basic building blocks that are optimized for data analytics and machine learning. These building blocks are highly optimized for the latest features of the latest Intel® processors. More about Intel DAAL can be found in 4. The K-means algorithm is supported in Intel DAAL. In this article, we use PyDAAL, the Python* API of Intel DAAL, to invoke the K-means algorithm. To install PyDAAL, follow the instructions in 6.

    Using the K-means Algorithm in Intel Data Analytics Acceleration Library

This section shows step by step how to use the K-means algorithm in Python7 with Intel DAAL.

    Do the following steps to invoke the K-means algorithm from Intel DAAL:

    1. Import the necessary packages using the commands from and import
      1. Import the necessary functions for loading the data by issuing the following command:
        from daal.data_management import FileDataSource, DataSourceIface
      2. Import the K-means algorithm and the initialized function ‘init’ using the following commands:
        import daal.algorithms.kmeans as kmeans
        from daal.algorithms.kmeans import init
    2. Initialize to get the data. Assume getting the data from a .csv file
      dataSet = FileDataSource(
          dataFileName,
          DataSourceIface.doAllocateNumericTable,
          DataSourceIface.doDictionaryFromContext
        )

      Where dataFileName is the name of the input .csv data file
    3. Load the data into the data set object declared above.
      dataSet.loadDataBlock()
    4. Create an algorithm object for the initialized centroids.
      init_alg = init.Batch_Float64RandomDense(nclusters)
Where nclusters is the number of clusters.
    5. Set input.
      init_alg.input.set(init.data, dataSet.getNumericTable())
    6. Compute the initial centroids.
initCentroids = init_alg.compute().get(init.centroids)
      Where initCentroids is the initial value of the centroids.
      Note: The above initCentroids value is computed randomly by the Batch_Float64RandomDense function above. Users can also assign a value to it.
    7. Create an algorithm object for clustering.
      cluster_alg = kmeans.Batch_Float64LloydDense(nclusters, nIterations)
    8. Set input.
      cluster_alg.input.set(kmeans.data, dataSet.getNumericTable())
      cluster_alg.input.set(kmeans.inputCentroids, initCentroids)
    9. Compute results.
      result = cluster_alg.compute()

The results can be retrieved using the following commands:
      centroids = result.get(kmeans.centroids)
      assignments = result.get(kmeans.assignments)
      goalfunction = result.get(kmeans.goalFunction)
      niterations = result.get(kmeans.nIterations)
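
    Putting the steps above together, the whole flow fits in one short script. The file name kmeans_data.csv and the values of nclusters and nIterations below are placeholders to replace with your own:

    from daal.data_management import FileDataSource, DataSourceIface
    import daal.algorithms.kmeans as kmeans
    from daal.algorithms.kmeans import init

    nclusters = 3                      # placeholder: number of clusters
    nIterations = 100                  # placeholder: maximum number of iterations
    dataFileName = "kmeans_data.csv"   # placeholder: input .csv data file

    # Load the data set from the .csv file
    dataSet = FileDataSource(dataFileName,
                             DataSourceIface.doAllocateNumericTable,
                             DataSourceIface.doDictionaryFromContext)
    dataSet.loadDataBlock()

    # Compute the initial centroids with random initialization
    init_alg = init.Batch_Float64RandomDense(nclusters)
    init_alg.input.set(init.data, dataSet.getNumericTable())
    initCentroids = init_alg.compute().get(init.centroids)

    # Run the clustering itself (Lloyd's algorithm on dense data)
    cluster_alg = kmeans.Batch_Float64LloydDense(nclusters, nIterations)
    cluster_alg.input.set(kmeans.data, dataSet.getNumericTable())
    cluster_alg.input.set(kmeans.inputCentroids, initCentroids)
    result = cluster_alg.compute()

    # Retrieve the results
    centroids = result.get(kmeans.centroids)
    assignments = result.get(kmeans.assignments)
    goalfunction = result.get(kmeans.goalFunction)
    niterations = result.get(kmeans.nIterations)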

    Conclusion

    K-means is one of the simplest unsupervised machine-learning algorithms that is used to solve the clustering problem. Intel DAAL contains an optimized version of the K-means algorithm. With Intel DAAL, you don’t have to worry about whether your applications will run well on systems equipped with future generations of Intel Xeon processors. Intel DAAL will automatically take advantage of new features in new Intel Xeon processors. What you need to do is link your applications to the latest version of Intel DAAL.

    References

    1. Wikipedia – machine learning

    2. Unsupervised learning

    3. K-means clustering

    4. Introduction to Intel DAAL

    5. Supervised learning

    6. How to install Intel’s distribution for Python

    7. Python website

    8. Euclidean distance

    Getting Started with Intel® IoT Gateways - Python


This article was written for Python*. To get started using JavaScript*, see Getting Started with Intel® IoT Gateways - JavaScript

    Introduction

    The Internet of Things (IoT) market is booming. Gartner forecasts that in 2016, 6.4 billion connected things will be in use worldwide, and support total services spending of $235 billion. They went on to predict that the number of connected things will reach 20.8 billion by 2020. The International Data Corporation (IDC) estimates the IoT market will reach $1.7 trillion in 2020. When it comes to connecting new and legacy systems to the Internet, there has never been a better time.

    Internet of Things (IoT) solutions have a number of moving parts. At the heart of all these solutions sits the IoT gateway, providing connectivity, scalability, security, and management. This getting started guide will help you understand:

    • What an IoT gateway is.
    • How it can be used as the hub of commercial and residential IoT solutions.
    • How to choose the right gateway for you.
    • What software development tools are available.
    • How to write a “Hello World” application that will run on your gateway.

    Let’s get started!

    What is an IoT Gateway?

    What is a Gateway For?

    An IoT gateway is the heart of an IoT solution that ties all of the pieces together. On one side we have all the “things”, which can include sensors, smart devices, vehicles, industrial equipment, or anything that can be made to be “smart”, and produce data. On the other side we have network and data infrastructure, which stores and processes all the data the things produce. Gateways are the connection between the things and the network and data infrastructure; they are the glue that holds it all together.

    Figure 1: A high-level overview of an IoT solution

    However, gateways are so much more than just the glue; they are a solution to the connectivity challenge that all developers face. The connectivity challenge has two major needs:

    1. Robust and secure access to the Internet or Wide Area Network (WAN).
    2. Ability to support access to a multitude of devices, many of which have limited to no processing capability.

    Connecting a single sensor to the Internet can get complicated and expensive. What happens when you have varying types of sensors, each with a different type of interface, and want to aggregate the data into a single interface?

Gateways help overcome these challenges by providing:

    1. Communications and connectivity
    2. Scalability
    3. Security
    4. Manageability

    Communications and Connectivity

    Wired and wireless connectivity are standard on many devices. Protocols you’ll find in use include Cellular 2G/3G/4G, Bluetooth*, Serial, USB, Virtual Private Network (VPN), Wi-Fi Access Point, MQ Telemetry Transport (MQTT) messaging protocol, and ZigBee*. These enable you to connect to sensors and systems using a variety of methods. With such a wide range of available protocols, you’ll be hard pressed to find a sensor or device you can’t connect to.

    Scalability

Much like network routers, IoT gateways can be connected together to form even larger solutions. Whether you’re creating a home automation system with a single gateway, or have a multi-acre industrial facility with new and legacy systems that need connecting, you can connect everything together into a single system.

    Security

    Data encryption and software lockdown are a couple of the security features you’ll find on IoT gateways. Additionally, many devices offer whitelisting, change control, secure storage, and secure boot, as well as a wide array of protocols, services, and malware protection. All of these features combine to ensure that your systems and data are kept secure at all times (a critical aspect, as the IoT continues to expand at exponential rates, becoming an even greater target for hackers and thieves).

    Manageability

Manageability refers to the deployment, maintenance, and management of your solutions. With the potential for complexity, simplicity in management is key. To help, you’ll find web-based interfaces you can securely access to maintain the gateway itself, manage the connected sensors, and control how data flows through it. Many gateways use some form of embedded Linux*, so the admin tools you know and love, like ssh and scp, are available.

    Usage Scenarios

    Commercial

    In a commercial setting, a gateway can connect a series of sensors (for example: light, temperature, smoke, energy, RFID) and systems (for example: HVAC, vending, security, transportation) to control and monitoring devices such as data stores and servers to be retrieved by laptops, tablets, and smart watches.

    Figure 2: An Intel example of an end-to-end commercial IoT deployment

    Specific examples include:

    • Commercial trucking companies collecting GPS and loading information from their fleets. Each truck has an Internet-connected gateway which filters and relays data from the truck’s systems.
    • Construction companies monitoring the noise levels on their sites in order to comply with local noise regulations. Each site has noise and vibration sensors connected to one or more gateways which send the data to the onsite supervisors.

    Residential

    The most common residential application of IoT is home automation. In this scenario, a gateway helps provide a single point of control by intelligently connecting your security system, thermostat, lighting controls, smoke detector, and more. Typically, a web interface accessed from within your home or securely over the Internet provides a unified view of all these systems.

    Smart meters are another example in common use today; they detect energy consumption information and send it back to the electric company as frequently as every hour, sometimes in even shorter intervals. These meters also allow for two-way communication between the meter and the electric company.

    Which Gateway is Right for Me?

With many options available on the market, which is right for you? Ultimately, that depends mainly on two factors:

    1. The type of Internet connectivity available (wired, wireless, cellular).
    2. The types of sensors you’ll be using and the types of interfaces they have (USB, serial, Bluetooth*).

    Intel has a large ecosystem of manufacturing partners which provide a variety of options. On the IoT section of the Intel® Developer Zone you’ll find two useful tools: the Solutions Directory and the Gateway Comparison Tool. Using both of these tools you’ll find solutions with the following features:

    Processors

    • Single-core Intel® Quark™ SoC X1000 400 MHz processors
    • Single, dual, and quad-core Intel® Atom™ processors
    • Single, dual, and quad-core Intel® Core™ processors

    Networking and Communications

    • Wi-Fi (single and multiple-radio)
    • Dual LAN
    • Bluetooth*
    • CAN bus
    • ZigBee*
    • 6LoWPAN
    • GPRS
    • 2G/3G/LTE
    • Analog and digital I/O
    • RS-232

    Operating Systems

• Wind River* Linux* 7
    • Snappy Ubuntu* Core
    • Microsoft Windows® 10 IoT

    This guide was written using the Advantech* UTX-3115 gateway with an OMEGA* RH USB sensor which measures temperature and relative humidity.

    Industry Verticals Applying IoT Technology

    We’ve seen a number of applications of IoT gateway technology in both the commercial and residential sectors, but where specifically can this technology be applied?

    Here’s a partial breakdown of industries and the verticals where IoT technology is being applied:

Public Sector
      • Cities: City Wi-Fi, parking, traffic
      • Public Safety: Schools, border, law enforcement
    Manufacturing
      • Factories: Energy management, security, automation
    Energy and Minerals
      • Utilities: Mobile workforce, substation and distribution automation
      • Oil and Gas: Pipeline monitoring, refinery systems, secure operations
      • Mining: Asset visibility and monitoring, predictive maintenance
    Transportation
      • Transportation: Roadways, trains, stations
    Business to Consumer (B2C)
      • Retail: Remote expert / mobile adviser, digital media store experience
      • Sports and Entertainment: Stadium, stadium Wi-Fi, stadium vision
      • SP & Machine-to-Machine (M2M): Remote tower management, fleet/asset management
      • Healthcare: Virtual patient observation, patient wayfinding
      • Financial Services Industry: In-branch customer experience, energy management

    Software Overview

Wind River* Linux* in the Context of Python*

The operating system of the Intel® IoT Gateway is Wind River* Linux*, a commercial embedded Linux distribution. Because it’s Linux, you can run just about anything on it, including Python*. In fact, the latest version of the Intel® IoT Gateway comes with Python 2.7.3 preinstalled. In addition, you can download updated Python packages and other applications from the Intel Open Source Technology Center, or by using the built-in IoT Gateway Developer Hub that’s running on the gateway.

    MRAA / UPM

MRAA (pronounced em-rah) is a low-level library written in C. The purpose of MRAA is to abstract the details associated with accessing and manipulating the basic I/O capabilities of a platform into a single, concise API. MRAA serves as a translation layer on top of the Linux General Purpose Input/Output (GPIO) facilities. Although Linux provides a fairly rich infrastructure for manipulating GPIOs, and its generic instructions for handling GPIOs are fairly standard, it can be difficult to use. You can use MRAA to communicate with both analog and digital devices; see the short Python sketch after the install commands below. Be sure to check out the MRAA API Documentation.

To install MRAA on your gateway, download the latest version from the Intel Open Source Technology Center using curl, and then use the rpm command to install it. As an example, if you’re running the current system version (7.0.0.13), the commands would be as follows:

    >> curl -O https://download.01.org/iotgateway/rcpl13/x86_64/libmraa0-0.8.0-r0.0.corei7_64.rpm
    >> rpm -ivh libmraa0-0.8.0-r0.0.corei7_64.rpm
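
    Once MRAA is installed, a minimal Python sketch looks like the following. (The pin numbers here are hypothetical and depend entirely on your board’s pinout; this is an illustration of the API, not code from this guide’s sample project.)

      import time
      import mraa

      led = mraa.Gpio(13)        # digital pin 13 (hypothetical; check your pinout)
      led.dir(mraa.DIR_OUT)      # configure the pin as an output

      sensor = mraa.Aio(0)       # analog pin 0 (hypothetical)

      for _ in range(5):
          led.write(1)                                # drive the pin high
          print "analog reading: %d" % sensor.read()  # Python 2 print, matching the gateway
          time.sleep(0.5)
          led.write(0)                                # drive the pin low
          time.sleep(0.5)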

    IDEs and Text Editors

    There are a number of IDEs and text editors available on the market today. If you already have a favorite, go ahead and use that. Otherwise, below are three options to choose from.

    PyCharm* from JetBrains

    Figure 3: PyCharm* from JetBrains

    From the makers of IntelliJ IDEA comes PyCharm*, a fully-featured IDE for developing Python applications. PyCharm packs a ton of features, including:

    • Intelligent Python assistance (code completion)
    • Support for web development frameworks including Django*, Flask, and Google App Engine* platform
    • Integration with Jupyter Notebook
    • Ability to run, debug, test, and deploy applications on remote hosts or virtual machines
    • A wealth of built-in developer tools
    • Even more!

    PyCharm comes in two flavors: a free community edition and a more fully-featured professional edition available via a subscription model.

    Eclipse* and PyDev

    Figure 4: Eclipse* and PyDev

    The Eclipse* IDE is an open-source platform that provides an array of convenient and powerful code editing and debugging tools. PyDev is a Python IDE for Eclipse that can be used for Python, Jython, and IronPython development. Like PyCharm, PyDev is a fully-featured IDE that includes:

    • Django integration
    • Code completion
    • Type hinting
    • Refactoring
    • Debugger with remote debugging capability
    • Application performance analysis using PyVmMonitor
    • Even more!

    Per the PyDev website, the best way to obtain PyDev is to download and install LiClipse.

    Sublime Text

    Figure 5: Sublime Text

    If you prefer something more lightweight yet still very powerful, a text editor is the way to go. Sublime Text is one such editor. It supports most programming languages; a handful of the features you’ll find in Sublime Text are:

    • Split window editing – edit files side by side, or edit two locations in one file.
    • Distraction-free mode – full screen, chrome-free editing, with nothing but your text in the center of the screen.
    • Instant project switch – instantly switch between projects.
    • Command Palette – search for what you want, without ever having to navigate through the menus or remember obscure key bindings.
    • Plugin API – a powerful, Python-based plugin API with a built-in Python console for interactive, real-time experimentation.

    Although Sublime Text is not free, the $70 cost may be worth the investment.

    Development Environment

    There are very rich development environments built around the Intel® IoT Gateways. These include both desktop and web-based tools. In this section of the guide you’ll learn how to flash the gateway’s operating system, and then program and debug a ‘Hello World’ application using the PyCharm IDE.

    These instructions will work on Windows, Mac, and Linux.

    What You Need to Get Going

    In order to begin developing on your gateway you’ll need:

    • An IoT gateway (for this guide we used the Advantech UTX-3115).
    • A USB sensor that measures temperature and relative humidity (for this guide we used the OMEGA Temperature and Humidity USB Sensor).
    • Power
    • An Ethernet cable to plug into your router (this is how the gateway will reach the Internet).
    • An IDE or text editor (for this example, we will use the PyCharm IDE).

    In order to connect to the gateway itself you’ll need network connectivity. This guide assumes that the gateway is sitting on the same network as your development computer. If network connectivity is unavailable for some reason, you can connect to the gateway via a serial terminal.

    Figure 6 shows the setup used in writing this guide:

    Figure 6: IoT Solution Diagram

    Getting Started / Hello World

    Flashing the OS

    In order to upgrade to the latest system version, you’ll need to flash the OS. To do so, follow the steps below:

    1. Obtain a USB drive that is at least 4 GB in size.
    2. Download the latest Gateway OS image.
    3. Unzip the image to a location of your choice on a Linux host system.
    4. Open a terminal window.
    5. Use the df command to verify the device node on which the USB drive is mounted. (df reports file system disk space usage, including the device for each mounted file system.)
    6. Use the following command to copy the OS image to the USB drive:
      sudo dd if=<path to recovery image file> of=/dev/sdb bs=4M; sync
    7. Unplug the USB drive from the host machine and boot the gateway from the USB drive.
    8. Log in to the gateway (root/root) and execute the following command:
      # /sbin/deploytool -d /dev/sda --reset-media -F
    9. Power off the gateway and then power it back on.

    When the gateway comes back up, log in and verify that you are now running the latest version of the OS by viewing the system version number on the dashboard.

    First Time Setup

    Now that you can log into the gateway, set it up by following the steps below:

    Note: In the SSID used in step 7 below, XXXX is the last four digits of the MAC address of the gateway’s wireless network adapter (br-lan). To find this MAC address, boot the gateway, log in as root, and type ‘ifconfig’; take the last four digits from the br-lan adapter.

    1. Unpack the gateway.
    2. Plug the Ethernet cable from your router into the necessary Ethernet port. For the Advantech, use the right side (eth0) Ethernet port.
    3. Connect a VGA or HDMI monitor.
    4. Optional – connect a mouse and keyboard. (The example in this guide uses a USB hub to plug both into the Advantech. The mouse is completely optional, though a keyboard is recommended).
    5. Connect the USB Sensor to the gateway.
    6. Connect the gateway to power and press the power button.
    7. Once the gateway boots, use your development computer’s wireless network adapter to connect to the gateway’s built-in Wi-Fi using the SSID and password.
      • SSID: IDPDK-XXXX
      • Password: windriveridp
    8. Open a browser on your PC – Google Chrome is recommended – and go to http://192.168.1.1. This will open the login page of the Intel® IoT Gateway Developer Hub, a web-based interface to manage sensors and prototype with visual programming tools.
    9. Log in using the following credentials:
      • Username: gwuser
      • Password: gwuser

    That’s it! You’re ready to develop.

    Programming Hello World Using PyCharm

    In this section we’ll create a basic “Hello World” application using PyCharm.

    1. After installing PyCharm, run it. On the splash screen select, “Create New Project”.
      Figure 7: PyCharm main menu
    2. Next, select the location of your project files and the Python interpreter you’ll be using, and click the “Create” button.
      Note: We strongly suggest developing with the same version of Python that is installed on the gateway. By default, both Python 2.7.3 and 3.3.3 are installed.
      Figure 8: Pycharm new project
    3. After you create your project, the project window appears. In this window, right-click the name of your project and select “New File” to create a new blank file. Name the file “hello_world.py”.
      Figure 9: Pycharm new project > new file
    4. Once the ‘hello_world.py’ file is created, add the following code and save the file:
      #!/usr/bin/env python
      # encoding: utf-8
      """The standard hello world script."""
      print "Hello World!"
    5. To ensure your Python script runs, right-click the file and select “Run ‘hello_world’”.
      Figure 10: Run “hello_world”
    6. Once the script runs you can view the output in the console that appears at the bottom of the IDE.
      Figure 11: hello_world output

    Next we want to run the script on the gateway. To do that we will first copy the file from our development computer to the gateway using a secure copy (SCP) client application on the development computer.

    1. First, download and install the latest version of WinSCP. When installing it, I recommend selecting the Commander user interface style, as it allows drag-and-drop transferring of files.
      Figure 12: WinSCP Setup
    2. At the end of the installation, choose to open WinSCP.
    3. Log in to the gateway by configuring a login in WinSCP. Ensure that the file protocol you are using is ‘SCP’. Your settings will look similar to this:
      Figure 13: Configure a Login with WinSCP
    4. With the connection created, next we need a folder for the script. Click the Login button to log in to the gateway. This will connect you to the root folder on the gateway and open the folder in the right-hand pane.
    5. Next, right-click the right pane and select New > Directory.
    6. Name the directory ‘apps’.
    7. Double-click on the apps directory to open it up and create another directory named “helloworld” using the same steps. Double-click on the “helloworld” directory to open it if it isn’t already open.
    8. In the left-hand pane of the WinSCP application, navigate to the folder where you created the hello_world.py file. Drag and drop the file to the right-hand pane to copy it. You should see something like this:
      Figure 14: WinSCP hello_world.py file folder
    9. With the file on your gateway, open a command prompt on your development computer and use the ssh command to securely connect to the gateway:
      ssh root@192.168.1.4
    10. Next, change into the helloworld directory you created and use the python command to run the file:
      cd apps/helloworld
      python hello_world.py
    11. You should see the output of the script:
      Figure 15: Script output

    Coding Basics

    Intel provides a number of how-to articles and videos on using the IoT Gateway on the IoT section of the Intel® Developer Zone. The following videos and articles will help you get started:

    Additionally, the following guides contain useful information and instructions:

    Debugging Hello World with PyCharm

    To debug your Hello World application, first open the Hello World project you created in a previous step and run it as before to ensure that everything is in working order. Then follow the steps below:

    1. To debug the application, create a breakpoint on the line which you want debugging to start by clicking to the left of the code. When the breakpoint is created, you will see a solid red dot beside the line.
      Figure 16: hello_world.py breakpoint
    2. With the breakpoint created, click the Debug button in the top right corner (it’s the button that looks like a bug).
      Figure 17: Debugging hello_world.py
    3. PyCharm will run your Python application to the breakpoint, at which point the debug panel becomes available.
      Figure 18: Running hello_world.py to breakpoint
    4. The debug panel shows a lot of useful information about the state of your program at the point where it stopped, including all of the objects created thus far.
    5. Once you are finished with the debug panel, click the Resume Program Execution button on the left side of the debug panel, second from the top (it looks like a green arrow pointing to the right).
      Figure 19: Resume program execution of hello_world.py

    Where to Go From Here

    At this point, you’ve accomplished quite a bit! In this guide you’ve:

    • Learned how to select the gateway that’s appropriate for your application.
    • Discovered a number of IDE and text editor options.
    • Learned how to flash the gateway and upgrade it to the latest system version.
    • Set up the gateway for development.
    • Written a simple hello world application in Python, copied it to the gateway, run it, and debugged it.

    As a next step, read and implement the lessons in the articles and videos listed in the Coding Basics section. These will show you how to pull data from a sensor, as well as publish the data you capture. Once you have the data, there are a wealth of options to both analyze and visualize the data.

    Getting Started with Intel® IoT Gateways - JavaScript


    This article was written for JavaScript*. To get started using Python*, see Getting Started with Intel® IoT Gateways - Python.

    Introduction

    The Internet of Things (IoT) market is booming. Gartner forecasts that in 2016, 6.4 billion connected things will be in use worldwide, supporting total services spending of $235 billion, and predicts that the number of connected things will reach 20.8 billion by 2020. The International Data Corporation (IDC) estimates the IoT market will reach $1.7 trillion in 2020. When it comes to connecting new and legacy systems to the Internet, there has never been a better time.

    Internet of Things (IoT) solutions have a number of moving parts. At the heart of all these solutions sits the IoT gateway, providing connectivity, scalability, security, and management. This getting started guide will help you understand:

    • What an IoT gateway is.
    • How it can be used as the hub of commercial and residential IoT solutions.
    • How to choose the right gateway for you.
    • What software development tools are available.
    • How to write a “Hello World” application that will run on your gateway.

    Let’s get started!

    What is an IoT Gateway?

    What is a Gateway For?

    An IoT gateway is the heart of an IoT solution that ties all of the pieces together. On one side we have all the “things”, which can include sensors, smart devices, vehicles, industrial equipment, or anything that can be made to be “smart”, and produce data. On the other side we have network and data infrastructure, which stores and processes all the data the things produce. Gateways are the connection between the things and the network and data infrastructure; they are the glue that holds it all together.

    Figure 1: A high-level overview of an IoT solution

    However, gateways are so much more than just the glue; they are a solution to the connectivity challenge that all developers face. The connectivity challenge has two major needs:

    • Robust and secure access to the Internet or Wide Area Network (WAN).
    • Ability to support access to a multitude of devices, many of which have limited to no processing capability.

    Connecting a single sensor to the Internet can get complicated and expensive. What happens when you have varying types of sensors, each with a different type of interface, and want to aggregate the data into a single dashboard?

    Gateways help overcome these challenges by providing:

    • Communications and connectivity
    • Scalability
    • Security
    • Manageability

    Communications and Connectivity

    Wired and wireless connectivity are standard on many devices. Protocols you’ll find in use include Cellular 2G/3G/4G, Bluetooth*, Serial, USB, Virtual Private Network (VPN), Wi-Fi Access Point, MQ Telemetry Transport (MQTT) messaging protocol, and ZigBee*. These enable you to connect to sensors and systems using a variety of methods. With such a wide range of available protocols, you’ll be hard pressed to find a sensor or device you can’t connect to.

    Scalability

    Much like network routers, IoT gateways can be connected together to form even larger solutions. Whether you’re creating a home automation system with a single gateway, or have a multiacre industrial facility with new and legacy systems that need connecting, you can connect everything together into a single system.

    Security

    Data encryption and software lockdown are a couple of the security features you’ll find on IoT gateways. Additionally, many devices offer whitelisting, change control, secure storage, and secure boot, as well as a wide array of protocols, services, and malware protection. All of these features combine to ensure that your systems and data are kept secure at all times (a critical aspect, as the IoT continues to expand at exponential rates, becoming an even greater target for hackers and thieves).

    Manageability

    Manageability refers to the deployment, maintenance, and management of your solutions. Because these solutions can grow complex, simplicity in management is key. To help, gateways provide web-based interfaces you can securely access to maintain the gateway itself, manage the connected sensors, and control how data flows through it. Many gateways use some form of embedded Linux*, so the admin tools you know and love, like ssh and scp, are available.

    Usage Scenarios

    Commercial

    In a commercial setting, a gateway can connect a series of sensors (light, temperature, smoke, energy, RFID) and systems (HVAC, vending, security, transportation) to control and monitoring devices such as data stores and servers to be retrieved by laptops, tablets and smart watches.

    Figure 2: An Intel example of an end-to-end commercial IoT deployment

    Specific examples include:

    • Commercial trucking companies collecting GPS and loading information from their fleets. Each truck has an Internet-connected gateway which filters and relays data from the truck’s systems.
    • Construction companies monitoring the noise levels on their sites in order to comply with local noise regulations. Each site has noise and vibration sensors connected to one or more gateways which send the data to the onsite supervisors.

    Residential

    The most common residential application of IoT is home automation. In this scenario, a gateway helps provide a single point of control by intelligently connecting your security system, thermostat, lighting controls, smoke detector, and more. Typically, a web interface accessed from within your home or securely over the Internet provides a unified view of all these systems.

    Smart meters are another example in common use today; they detect energy consumption information and send it back to the electric company as frequently as every hour, sometimes in even shorter intervals. These meters also allow for two-way communication between the meter and the electric company.

    Which Gateway is Right for Me?

    With many options available on the market, which is right for you? Ultimately that depends mainly on two factors:

    1. The type of Internet connectivity available (wired, wireless, cellular).
    2. The types of sensors you’ll be using and the types of interfaces they have (USB, serial, Bluetooth*).

    Intel has a large ecosystem of manufacturing partners that provide a variety of options. On the IoT section of the Intel® Developer Zone you’ll find two useful tools: the Solutions Directory and the Gateway Comparison Tool. Using both of these tools you’ll find solutions with the following features:

    Processors

    • Single-core Intel® Quark™ SoC X1000 400 MHz processors
    • Single, dual, and quad-core Intel® Atom™ processors
    • Single, dual, and quad-core Intel® Core™ processors

    Networking and Communications

    • Wi-Fi (single and multiple-radio)
    • Dual LAN
    • Bluetooth*
    • CAN bus
    • ZigBee*
    • 6LoWPAN
    • GPRS
    • 2G/3G/LTE
    • Analog and digital I/O
    • RS-232

    Operating Systems

    • Wind River* Linux* 7
    • Snappy Ubuntu* Core
    • Microsoft Windows® 10 IoT

    This guide was written using the Advantech* UTX-3115 gateway with an OMEGA* RH USB sensor which measures temperature and relative humidity.

    Industry Verticals Applying IoT Technology

    We’ve seen a number of applications of IoT gateway technology in both the commercial and residential sectors, but where specifically can this technology be applied?

    Here’s a partial breakdown of industries and the verticals where IoT technology is being applied:

    Industry | Verticals | Example Use Cases
    Public Sector | Cities | City Wi-Fi, parking, traffic
    Public Sector | Public Safety | Schools, border, law enforcement
    Manufacturing | Factories | Energy management, security, automation
    Energy and Minerals | Utilities | Mobile workforce, substation and distribution automation
    Energy and Minerals | Oil and Gas | Pipeline monitoring, refinery systems, secure operations
    Energy and Minerals | Mining | Asset visibility and monitoring, predictive maintenance
    Transportation | Transportation | Roadways, trains, stations
    Business to Consumer (B2C) | Retail | Remote expert / mobile adviser, digital media store experience
    Business to Consumer (B2C) | Sports and Entertainment | Stadium, stadium Wi-Fi, stadium vision
    Business to Consumer (B2C) | SP & Machine-to-Machine (M2M) | Remote tower management, fleet/asset management
    Business to Consumer (B2C) | Healthcare | Virtual patient observation, patient wayfinding
    Business to Consumer (B2C) | Financial Services Industry | In-branch customer experience, energy management

    Software Overview

    Wind River* Linux* in the Context of Python*

    The operating system of the Intel® IoT Gateway is Wind River* Linux*, a commercial embedded Linux distribution. Because it’s Linux, you can run just about anything on it, including Python*. In fact, the latest version of the Intel® IoT Gateway comes with Python 2.7.3 preinstalled. In addition, you can download updated Python packages and other applications from the Intel Open Source Technology Center, or use the built-in IoT Gateway Developer Hub running on the gateway.

    MRAA / UPM

    MRAA (pronounced em-rah) is a low-level library written in C. The purpose of MRAA is to abstract the details associated with accessing and manipulating the basic I/O capabilities of a platform into a single, concise API. MRAA serves as a translation layer on top of the Linux General Purpose Input/Output (GPIO) facilities. Although Linux provides a fairly rich infrastructure for manipulating GPIOs, and its generic instructions for handling GPIOs are fairly standard, it can be difficult to use. MRAA smooths this over and lets you communicate with both analog and digital devices. Be sure to check out the MRAA API Documentation.

    To install MRAA on your gateway, download the latest version from the Intel Open Source Technology Center using curl, and then use the rpm command to install. As an example, if you’re running the current system version - 7.0.0.13 - the commands would be as follows:

    >> curl -O https://download.01.org/iotgateway/rcpl13/x86_64/libmraa0-0.8.0-r0.0.corei7_64.rpm
    >> rpm -ivh libmraa0-0.8.0-r0.0.corei7_64.rpm

    IDEs

    There are a number of IDE options available to developers - the Intel® XDK IoT Edition, Node-RED*, Wind River* Helix* App Cloud and Eclipse*. In addition, you can always ssh directly into your gateway and use vi to write your application, or scp to securely transfer your project to the gateway. If you’re just getting started, I recommend using either Node-RED or the Wind River Helix App Cloud.

    Intel® XDK IoT Edition

    Figure 3: Intel® XDK IoT Edition

    Use the Intel® XDK IoT Edition with Node.js* to create web interfaces, add sensors to your project, and work with the cloud. In addition to working with your gateway, you can also program your Intel® Edison and Galileo boards.

    Node-RED*

    Figure 4: Node-RED flow

    The official pitch for Node-RED* is that it’s a tool for “wiring together hardware devices, APIs and online services in new and interesting ways.” What it provides is browser-based flow editing built on top of Node.js*.

    Wind River* Helix* App Cloud

    Figure 5: Wind River* Helix* App Cloud

    Once you register your gateway on the Wind River* Helix* App Cloud, Cloud9* – a web-based IDE – becomes available. The great thing about the Helix App Cloud is that you can develop your application from anywhere, and once it’s ready you can instantly run it on your device.

    Development Environment

    There are very rich development environments built around the Intel® IoT Gateways. These include both desktop and web-based tools. In this section of the guide you’ll learn how to flash the gateway’s operating system, and then program and debug a ‘Hello World’ application using the Intel® XDK IoT Edition (desktop) and the Wind River Helix App Cloud (web-based).

    These instructions will work on both Windows and Mac.

    What You Need to Get Going

    In order to begin developing on your gateway you’ll need:

    • An IoT gateway (for this guide we used the Advantech UTX-3115).
    • A USB sensor that measures temperature and relative humidity (for this guide we used the OMEGA Temperature and Humidity USB Sensor).
    • Power
    • An Ethernet cable to plug into your router (this is how the gateway will reach the Internet).

    In order to connect to the gateway itself you’ll need network connectivity. This guide assumes that the gateway is sitting on the same network as your development computer. If network connectivity is unavailable for some reason, you can connect to the gateway via a serial terminal.

    Figure 6 shows the setup used in writing this guide:

    Figure 6: IoT Solution Diagram

    Getting Started / Hello World

    Flashing the OS

    In order to upgrade to the latest system version, you’ll need to flash the OS. To do that, use these steps:

    1. Obtain a USB drive that is at least 4 GB in size.
    2. Download the latest Gateway OS image.
    3. Unzip the image to a location of your choice on a Linux host system.
    4. Open a terminal window.
    5. Use the df command to verify the device node on which the USB drive is mounted. (df reports file system disk space usage, including the device for each mounted file system.)
    6. Use the following command to copy the OS image to the USB drive:
      sudo dd if=<path to recovery image file> of=/dev/sdb bs=4M; sync
    7. Unplug the USB drive from the host machine and boot the gateway from the USB drive.
    8. Log in to the gateway (root/root) and execute the following command:
      # /sbin/deploytool -d /dev/sda --reset-media -F
    9. Power off the gateway and then power it back on.

    When the gateway comes back up, log in and verify that you are now running the latest version of the OS by viewing the system version number on the dashboard.

    First Time Setup

    Now that you can log into the gateway, set it up by following the steps below:

    Note: In the SSID used in step 7 below, XXXX is the last four digits of the MAC address of the gateway’s wireless network adapter (br-lan). To find this MAC address, boot the gateway, log in as root, and type ‘ifconfig’; take the last four digits from the br-lan adapter.

    1. Unpack the gateway.
    2. Plug the Ethernet cable from your router into the necessary Ethernet port. For the Advantech, use the right side (eth0) Ethernet port.
    3. Connect a VGA or HDMI monitor.
    4. Optional – connect a mouse and keyboard. (The example in this guide uses a USB hub to plug both into the Advantech. The mouse is completely optional, though a keyboard is recommended).
    5. Connect the USB Sensor to the gateway.
    6. Connect the gateway to power and press the power button.
    7. Once the gateway boots, use your development computer’s wireless network adapter to connect to the gateway’s built-in Wi-Fi using the SSID and password.
      • SSID: IDPDK-XXXX
      • Password: windriveridp
    8. Open a browser on your PC – Google Chrome is recommended – and go to http://192.168.1.1. This will open the login page of the Intel® IoT Gateway Developer Hub, a web-based interface to manage sensors and prototype with visual programming tools.
    9. Log in using the following credentials:
      • Username: gwuser
      • Password: gwuser

    That’s it! You’re ready to develop.

    Programming Hello World with the XDK

    In this section we’ll create a JavaScript* “Hello World” application using the Intel® XDK IoT Edition.

    After installing and signing in to the XDK, click the Start a New Project button on the bottom left of the IDE. Under the Internet of Things Embedded Application section, click the Templates link, and then select Blank Template.

    Figure 7: Writing “Hello World” with the XDK – Select blank template

    After that, click the Continue button on the bottom right. Add a project name in the popup that opens, and then click the Create button to create your project.

    Figure 8: Writing “Hello World” with the XDK – New project name and location

    Once your project is open, select your gateway from the IoT Device dropdown in the bottom left of the XDK. In this example, the gateway has been given an IP address of 192.168.1.9.

    Figure 9: Writing “Hello World” with the XDK – Select your gateway

    Note: In order for the XDK to automatically find your gateway, your development computer must be on the same subnet. If you have connected to the gateway using its built-in wireless router (as we did above), then you are on the same subnet as the gateway. If your computer is not on the same subnet, but you can ping the IP address of the gateway - for example, if your gateway is plugged in to your network using a network cable - you can use the Add Manual Connection option from the dropdown to manually connect to your device.

    Next, in the editor, type the following code on line 5:

    console.log("Hello World! This is the Intel XDK");

    Your editor window should look like this:

    Figure 10: Writing “Hello World” with the XDK – Editor window

    Save your changes by selecting File > Save.

    With your application created, you need to upload the project to the gateway. To do so, click the Upload button, which is a downward facing arrow.

    Figure 11: Writing “Hello World” with the XDK – Uploading the project to the gateway

    Now that your application is on the gateway, you can run it. To do so, click the Run button (a green circle with a white arrow in it).

    Figure 12: Writing “Hello World” with the XDK – Run the application

    When the application runs you will see the output on the console.

    Figure 13: Writing “Hello World” with the XDK – Console output

    Programming Hello World Using Wind River® Helix™ App Cloud

    In this section we’ll create a JavaScript “Hello World” application using the Wind River Helix App Cloud.

    The first thing you need to do is create an account on App Cloud and register your device. To do that, log in to your gateway, click the Administration image just under the dashboard, and under the Quick Tools section, click the Launch button underneath the App Cloud image. Follow the directions there to register your gateway.

    Note: The unique ID the gateway creates expires after 20 minutes, so you’ll want to verify your email address and log back in within that time period. If you miss your window, you can generate a new code and register at that point.

    Once logged into the App Cloud, click the Create new project button under the Application Projects section. On the popup that appears, enter a project name and select the JavaScript Hello World template.

    Figure 14: Writing “Hello World” with Wind River Helix – Create a new project

    Hit the OK button to create your project. Once created, click the Open button to open it in the Cloud9 editor. Once the editor opens, click the hello.js file in the workspace tab.

    If you want, change the text that will show up on the console. I updated the text to say, “Hello World. This is the Cloud9 IDE!”. Save your changes by clicking File > Save. Your project is now ready to run! To do so, click the green Run button. The application is downloaded to your gateway and run by Node.js. The console on the bottom part of the editor shows that our project did indeed run on the gateway.

    Figure 15: Writing “Hello World” with Wind River Helix – Running “Hello World”

    To deploy this application to your gateway:

    • In the Cloud9 editor, download the project by selecting File > Download Project.
    • Use scp to copy the compressed file to your gateway. In my case the command was:
      scp ~/Downloads/HelloWorldTestOne.tar.gz root@192.168.1.4:/users/robertonrails
    • Use the tar command to uncompress the project:
      tar -zxvf HelloWorldTestOne.tar.gz
    • Use node to run hello.js without the debugger:
      node --nodead_code_elimination --nolazy --nocrankshaft ./HelloWorldTestOne/hello.js

    Coding Basics

    Intel provides a number of how-to articles and videos on using the IoT Gateway on the IoT section of the Intel Developer Zone. The following videos and articles will help you get started:

    Additionally, the following guides contain useful information and instructions:

    Debugging Hello World with the XDK

    To debug your application using the XDK, first open up the Hello World application you created in a previous step. Re-upload the application to the gateway and run it to ensure that everything is working well.

    To debug the application, click the Debug button. The debug button is an image of a bug with a green arrow on it.

    Figure 16: Debugging “Hello World” with the XDK

    After you click the button, the debugger window will open.

    Figure 17: Debugging “Hello World” with the XDK – Application state

    You can use this window to see the current state of the application including all local variables and the call stack.

    To create a breakpoint, select the desired line. You will then see the breakpoint you’ve created in the Breakpoints section on the left of the debugger window.

    Figure 18: Debugging “Hello World” with the XDK – Creating a breakpoint

    When you run the debugger the application will stop at this point and you can debug your application.

    Debugging Hello World with the Wind River Helix App Cloud

    Let’s debug our Hello World application.

    To do so, the first thing we need to do is open our Hello World application. Next, open hello.js. After that, click the space to the left of our one line of code. This should add a red dot beside line 25. To debug the app, click the green Run button. The application will automatically stop and the debug panel will open on the right hand side of the editor.

    Figure 19: Debugging “Hello World” with the Wind River Helix

    In the image above the debugger area is expanded so we can see more of what’s going on. From here we can browse the current state of our application at the breakpoint we specified, including all local variables and the call stack. If we wanted to, we could also enter watch expressions.

    To resume the running of our application, either click the green arrow at the top of the debug window or hit F8 on your keyboard. The program will then resume and we’ll see our familiar message printed to the console.

    Where to Go From Here

    In this guide you’ve accomplished quite a bit:

    • Learned how to select the gateway that’s appropriate for your application.
    • Discovered a number of IDE and text editor options.
    • Learned how to flash the gateway and upgrade it to the latest system version.
    • Set up your gateway for development.
    • Written a simple hello world application in JavaScript, deployed it to the gateway, run it, and debugged it.

    As a next step, read and implement the lessons in the papers listed in the Coding Basics section. These papers will show you how to connect sensor output to cloud databases, as well as how to save a copy of the gateway’s operating system and deploy it to additional gateways.


    How to install Windows 10 IoT Core on Intel Joule


    During the last Intel® IDF in San Francisco, the Intel® Joule™ board was presented with support for three different OS BSPs: Ostro*, Ubuntu*, and Windows® 10 IoT Core.

    Images for the first two operating systems were published at IDF; the public Windows 10 IoT Core image and support were published in mid-October.

    As with all the other Windows 10 IoT Core images for supported boards, the distribution is hosted on a single site:

    www.WindowsOnDevices.com .

    I'll try to graphically describe the step-by-step procedure that Microsoft publishes to prepare and flash the board.

    The previous link shows the following page:


    Proof of Concept: A Vital Stage of Enterprise B2B App Development


    As we discussed in our first article on B2B enterprise apps, to develop a successful app for the enterprise market, you’ll need to build long-term relationships with your customers—these are definitely not one-off downloads. In the discovery phase, you focused on identifying the two important enterprise customer types—the end user, and the check-writer—and gathering data from both types of customers to understand their pain points, which allowed you to put together a solid plan.

    Now you’re ready to build the first version of your product, or your proof of concept, which will allow you to further validate the market. Will your idea work? Will it meet your customers’ needs? To find out, you'll need to convince at least one of your established contacts to work with you more closely, converting that company into a reference customer, and gathering their insight as you build a proof of concept and begin to iterate.

    Find Your Reference Customers

    You talked to a lot of potential customers as you researched end-user and check-writer pain points. During that process, you likely met a few people who were as excited about your solution as you are. These are the folks to reach out to about the organization becoming a reference customer. Your ideal reference customer is an influencer in the industry you’re serving who has fairly typical needs around your solution and is willing to pilot your app. During the pilot, you will fine-tune the app based on their feedback. Once it’s to their liking, these organizations will serve as references to help you scale your solution to others in the industry.

    Ideas to consider:

    • Look for a single reference customer—more could get messy if you need to build out specific features for each of them.
    • Offer your app for a nominal fee in exchange for their help and feedback.
    • Plan for the pilot to last 3-6 months to gain the most useful feedback.

    Leverage Your Champions

    Once an organization has agreed to the pilot, you will need to gather some additional requirements before you start to build. Engage your champion(s) within the organization to help you determine the minimum barriers to entry—companywide or industry requirements, such as data security, compliance, and compatibility that aren’t exactly features of your app, but need to be built in for the company to be able to adopt it. Your champions can help you answer these key questions:

    Who else needs to approve it?

    Your main client will connect you to additional decision makers within their company, such as legal, IT, cross-functional partners, and even additional off-label users who might interact with the product or use it in different ways. What will they be looking for? What role will they play in the final product or purchasing decision?

    What matters to these additional stakeholders?

    From your research in phase 1, you probably have a good understanding of which key features you should include in your first iteration, and which can wait. (If you don’t, work with the end-user and check-writer customers in this organization to find out.) At this point, you need to also identify any requirements these additional stakeholders might have. Are there specific rules or systems that must be used for data security? Does the organization have compliance needs that absolutely must be addressed in the proof of concept before they’ll even consider it?

    What objections might they have?

    A corollary to the above—be sure to listen for what objections each stakeholder might have. Does the legal department have concerns about how the product will be branded? Is IT worried about compatibility? These needs of various stakeholders may well be in conflict; be prepared to keep pressing for a solution—and don’t be shy about leveraging your champion, with her inside knowledge of the organizational culture and requirements—to help.

    POC vs MVP—What’s the Difference?

    Once you’ve deepened your relationship with your stakeholders and have a plan for the minimum required feature set, you're ready to build a proof of concept. But what exactly is that, and how is it different from a minimum viable product, or MVP?

    If you’re coming from the B2C or consumer market, the MVP is a very familiar idea. An MVP is a product that contains enough of the key features to fulfill the concept and to function, but is streamlined and minimized for the quickest possible build—so you can start getting real feedback from real customers as soon as possible.

    A proof of concept, or POC, operates under a similar theory, with a focus on speed and iteration. However, because you're now working with an enterprise B2B client, even your first simple prototype will need to be a lot more baked. There's a lot more on the line, in terms of investment, and you’ll need to be a lot more intentional about the features that are included.

    Let’s consider the example we used in the last enterprise B2B article, of an e-commerce portal for marketers to help them manage inventory, maintain presentations, and increase transactions. In building the POC, it's likely that you'll want to not only demonstrate key functionality, such as updating product content and tracking real-time availability, you may also need to test the connection to the existing POS system, or include metadata to get the analytics team on board.

    A Word About Custom Features

    In the consumer app world, you’d never create a specific feature set to serve the needs of a single customer. But in the enterprise world, you might need to do just that. Imagine that one of the biggest retailers in the world wanted to use your e-commerce portal—but required integration with a legacy system not used by the rest of the industry. Satisfying this giant might be your ticket to industrywide adoption, making the extra work of the custom build well worth it in the end.

    Iterate Until You Have a Product Your Reference Customer Loves

    Just like with an MVP, your proof of concept is just the beginning. Iteration is key. Your product has moved beyond theory and it's time for you, and your customers, to further refine your needs and requirements. Don't be surprised if some of your features change pretty significantly once they're implemented and tested—that's why the proof of concept phase is so important. Continue to work closely with your champion, and all of the related stakeholders, to create a product that truly serves your enterprise customers’ many needs.

    Fluid Simulation for Video Games (Part 21)


    Download Article

    Download Fluid Simulation for Video Games (Part 21) [PDF 830KB]

    Recapitulation

    We want games to be fun and look pretty and plausible.

    Fluid simulation can augment game mechanics and enhance the aesthetics and realism of video games. Video games demand high performance on a low budget but not great accuracy. By low budget, I mean both computational and human resources: the game has to run fast, and it can’t take a lot of developer or artist time. There are many ways to simulate fluids. In this series, I explain methods well suited to video games: cheap, pretty, and easily written.

    If you want to simulate fluids in video games, you have to overcome many challenges. Fluid mechanics is a complicated topic soaked in mathematics that can take a lot of time to understand. It’s also numerically challenging—naïve implementations are unstable or just behave wrong—and because fluids span every point in space, simulating them costs a lot of computational resources, both processing and memory. (Although fluid simulations are well suited to running on a graphics processing unit [GPU], in video games, the GPU tends to be busy with rendering.) The easiest and most obvious way to improve numerical stability—use more and smaller time steps—adds drastically more computational cost, so other techniques tend to be employed that effectively increase the viscosity of the fluid, which means that in-game fluids tend to look thick and goopy. The most popular techniques, like smoothed particle hydrodynamics, aren’t well suited to delicate, wispy motions like smoke and flame. A simulation approach that could meet these challenges would help add more varieties of fluids, including flames and smoke, to video games.

    To meet these challenges, I presented a fluid simulation technique suited to simulating fine, wispy motion and that builds on a particle system paradigm found in any complete three-dimensional (3D) game engine. It can use as many CPU threads as are available—more threads will permit more sophisticated effects.

    This approach has many synergies with game development, including the following:

    • Game engines support particle systems and physics simulations.
    • CPUs often have unused cores.
    • Fluid simulation readily parallelizes.

    The approach I present uses the vortex particle method (VPM), an unusual technique that yields the following benefits:

    • It’s a particle-based method that can reuse an existing particle engine.
    • Vortex simulations are well suited to delicate, wispy motion like smoke and flame.
    • The simulation algorithm is numerically stable, even without viscosity, either explicit or implicit.

    This series explains the math and the numerics. I presented variations on the theme of particle-based fluid simulation and included a model of thermal convection and combustion so that the system can also model flames. I showed two numerical approaches—integral and differential—compared their relative merits, and presented a hybrid approach that exploits the benefits of each approach while avoiding their pitfalls. The result is a fast (linear in the number of particles) and smooth fluid simulation capable of simulating wispy fluids like flames and smoke.

    Despite its apparent success in certain scenarios, the VPM has significant limitations in other scenarios. For example, it has difficulty representing interfaces between liquids and gases, such as the surface of a pool of water. I took a brief detour into smoothed particle hydrodynamics (SPH) to explore one way you could join VPM and SPH in a common hybrid framework, but (to phrase it generously) I left a lot of room for improvement.

    This series of articles leaves several avenues unexplored. I conclude this article with a list of ideas that I encourage you to explore and share with the community.

    Part 1 and part 2 summarized fluid dynamics and simulation techniques. Part 3 and part 4 presented a vortex-particle fluid simulation with two-way fluid–body interactions that run in real time. Part 5 profiled and optimized that simulation code. Part 6 described a differential method for computing velocity from vorticity. Part 7 showed how to integrate a fluid simulation into a typical particle system. Part 8, part 9, part 10, and part 11 explained how to simulate density, buoyancy, heat, and combustion in a vortex-based fluid simulation. Part 12 explained how improper sampling caused unwanted jerky motion and described how to mitigate it. Part 13 added convex polytopes and lift-like forces. Part 14, part 15, part 16, part 17, and part 18 added containers, SPH, liquids, and fluid surfaces. Part 19 provided details on how to use a treecode algorithm to integrate vorticity to compute vector potential.

    Fluid Mechanics

    With the practical experience of having built multiple fluid simulations behind us, I now synopsize the mathematical and physical principles behind those simulations. Fluid simulation entails running numerical algorithms to solve a system of equations simultaneously. Each equation governs a different aspect of the physical behaviors of a fluid.

    Momentum

    The momentum equation (one version of which is the famous Navier-Stokes equation) describes how momentum evolves and how mass moves. This is a nonlinear equation, and the nonlinearity is what makes fluids so challenging and interesting.
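
    For reference, one standard incompressible form of the momentum equation (written in common notation rather than quoted from earlier parts of this series) is

      \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho} \nabla p + \nu \nabla^2 \mathbf{u} + \mathbf{g}

    where \mathbf{u} is velocity, p is pressure, \rho is density, \nu is kinematic viscosity, and \mathbf{g} is gravity; the advective term (\mathbf{u} \cdot \nabla)\mathbf{u} is the nonlinearity in question.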

    Vorticity is the curl of velocity (and, up to a constant density factor, the curl of momentum). The vorticity equation describes where fluid swirls—where it moves in an “interesting” way. Because vorticity is a derivative of momentum, solving the vorticity equation is tantamount to solving the momentum equation. This series of articles exploits that connection and focuses numerical effort only where the fluid has vorticity, which is usually much sparser than where it has momentum. The vorticity equation also implicitly discards divergence in fluids—the part that relates to the compressibility of fluids. Correctly dealing with compressibility requires more computational resources or more delicate numerical machinations, but in the vast majority of scenarios pertinent to video games, you can neglect compressibility. So the vorticity equation also yields a way to circumvent the compressibility problem.
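
    Taking the curl of the momentum equation gives the vorticity transport equation; for constant density and in the same notation it reads

      \frac{\partial \boldsymbol{\omega}}{\partial t} + (\mathbf{u} \cdot \nabla)\boldsymbol{\omega} = (\boldsymbol{\omega} \cdot \nabla)\mathbf{u} + \nu \nabla^2 \boldsymbol{\omega}

    with \boldsymbol{\omega} = \nabla \times \mathbf{u}. The left side is advection, (\boldsymbol{\omega} \cdot \nabla)\mathbf{u} is the stretch-and-tilt term discussed below, and the last term is viscous diffusion.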

    Advection describes how particles move. Both the momentum equation and the vorticity equation have an advective term. It’s the nonlinear term, so it’s responsible both for the most interesting aspects of fluid motion and the most challenging mathematical and numerical issues. Using a particle-based method lets us separate out the advective term and handle it by simply moving particles around according to a velocity field. This makes it possible—easy, in fact—to incorporate the VPM into a particle system. It also lets the particle system reuse the velocity field both for the “vortex particles” the fluid simulation uses and for propagating the “tracer” particles used for rendering visual effects.
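
    To make the advection step concrete, here is a minimal Python sketch (illustrative only; velocity_at is a hypothetical function that samples the velocity field, for example by interpolating a grid or summing vortex particle contributions):

      def advect(positions, velocity_at, dt):
          """Move each particle along the local fluid velocity for one time step,
          using explicit Euler integration."""
          new_positions = []
          for (x, y, z) in positions:
              vx, vy, vz = velocity_at(x, y, z)  # sample the velocity field
              new_positions.append((x + dt * vx, y + dt * vy, z + dt * vz))
          return new_positions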

    The buoyancy term of the momentum and vorticity equations describes how gravity, pressure, and density induce torque. This effect underlies how hot fluids rise and so is crucial for simulating the rising motion of flames and smoke. Note that the VPM simulation technique in this series did not model pressure gradients explicitly but instead assumed that pressure gradients lie entirely along the gravity direction. This supposition let the simulation capture buoyancy without modeling pressure as a separate field. To model pressure gradients correctly, you must typically model compressibility, which, as mentioned elsewhere, usually costs a lot of computational resources. So by making the simplifying assumption that the pressure gradient always lies along the gravity direction, you see a drastic computational savings. Computing density gradients requires knowing the spatial relationship between adjacent particles. In this series, I presented two ways to solve this: a grid-based approach and a particle-based approach. The grid-based approach directly employs a spatial partitioning scheme that is also used in computing viscosity effects. The particle-based approach uses a subset of the algorithms that SPH uses. Both can yield satisfactory results, so the decision comes down to which approach costs less.
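
    In symbols (a standard form, not quoted from earlier parts), the baroclinic term that generates this torque is

      \frac{1}{\rho^2} \, \nabla \rho \times \nabla p

    and under the simplifying assumption that \nabla p \approx \rho \mathbf{g}, it reduces to (\nabla \rho \times \mathbf{g}) / \rho.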

    The stress and strain terms in the momentum and vorticity equations describe how pressure and shears induce motion within a fluid. This is where viscosity enters the simulation. Varying the magnitude and form of viscosity permits the simulation to model fluids ranging from delicate, wispy stuff like smoke to thick, goopy stuff like oil or mucus. The fluid simulation in this series used the particle strength exchange (PSE) technique to exchange momentum between nearby particles. This technique requires that the simulation keep track of which particles are near which others—effectively, knowing their nearest neighbors. I presented a simplistic approach that used a uniform grid spatial partition, but others could work, and this is one of the avenues I encourage you to explore further.
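
    To make the PSE idea concrete, here is a naive O(N²) Python sketch (an illustration assuming a Gaussian weight, not the code from earlier parts; a real implementation would query a spatial partition instead of testing all pairs):

      import math

      def particle_strength_exchange(positions, values, radius, rate, dt):
          """Diffuse 'values' (e.g., momentum or heat) between nearby particles.
          The symmetric exchange conserves the total of the quantity."""
          new_values = list(values)
          n = len(positions)
          for i in range(n):
              for j in range(i + 1, n):
                  dx = positions[i][0] - positions[j][0]
                  dy = positions[i][1] - positions[j][1]
                  dz = positions[i][2] - positions[j][2]
                  dist2 = dx * dx + dy * dy + dz * dz
                  if dist2 < radius * radius:
                      w = math.exp(-dist2 / (radius * radius))  # Gaussian weight
                      flux = rate * dt * w * (values[j] - values[i])
                      new_values[i] += flux
                      new_values[j] -= flux
          return new_values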

    The stretch and tilt terms of the vorticity equation describe how vortices interact with each other over distance as a result of configuration. This is strictly a 3D effect, and it leads to turbulent motion. Without this effect, fluids would behave in a much less interesting way. The algorithms I presented compute stretch and tilt using finite differences, but others could work. At the end of this article, I mention an interesting side effect of this computation that you could use to model surface tension.

    Conservation of Mass

    The continuity equation states that the change in the mass of a volume equals inflow/outflow of mass through volume surfaces. As described earlier, the simulation technique in this series dodged solving that equation explicitly by imposing that the fluid is incompressible.
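
    In symbols, the continuity equation reads

      \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0

    and imposing incompressibility reduces it to \nabla \cdot \mathbf{u} = 0.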

    Equation of State

    The equation of state describes how a fluid expands and contracts (and therefore changes density) as a result of heat. Coupled with the buoyancy term in the momentum equation, the equation of state permitted the algorithm to simulate the intuitive behavior that “hot air rises, and cold air sinks.”
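
    As an illustration (one common choice, not necessarily the exact form used in earlier parts), the ideal gas law

      p = \rho R T

    implies that at roughly constant pressure, density varies inversely with temperature, which is why heated fluid becomes less dense and rises.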

    Combustion

    The Arrhenius equation describes how components of fluid transform: fuel to plasma to exhaust. It also describes how fluid heats up, which feeds into the equation of state to model how the fluid density changes with temperature, hence, causing hot air to rise.
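
    For reference, the Arrhenius rate law has the form

      k = A \, e^{-E_a / (R T)}

    where E_a is the activation energy, T is temperature, R is the gas constant, and A is a prefactor; the reaction rate rises steeply with temperature, which couples combustion to the heating that drives buoyancy.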

    Drag

    Drag describes how fluids interact with solid objects. I presented an approach that builds on the PSE approach used to model viscosity: I treat fluid particles and solid objects in a similar paradigm. I extended the process to exchange heat, too, so that solid objects can heat or cool fluids and vice versa.

    Spatial Discretization

    Fluid equations operate on a continuous temporal and spatial domain, but simulating them on a computer requires discretizing the equations in both time and space. You can discretize space into regions, and those regions can either move (for example, with particles) or not (for example, with a fixed grid).

    As its name suggests, the VPM is a particle-based method rather than a grid-based method. The algorithm I presented, however, also uses a uniform grid spatial partition to help answer queries about spatial relationships, such as knowing which particles are near which others or which particles are near solid objects interacting with the fluid. Many spatial partitions are available, and many implementations of each are possible. For this article series, I chose something reasonably simple and reasonably fast, but I suspect that it could be improved dramatically, so I provide some ideas you can try at the end of this article.

    Note: Other discretizations are possible—for example, in a spectral domain. I mention this in passing so that curious readers know about other possibilities, but for the sake of brevity I omit details.

    Vortex Particle Method

    In this series, I predominantly employed the VPM for modeling fluid motion, but even within that method, you have many choices for how you implement various aspects of the numerical solver. Ultimately, the computer needs to obtain velocity from vorticity, and there are two mathematical approaches to doing so: integral and differential. Each of those mathematical approaches can be solved through multiple numerical algorithms.

    The integral techniques I presented are direct summation and treecode. Direct summation has asymptotic time complexity O(N²), which is the slowest of those presented but also the simplest to implement. Treecode has asymptotic time complexity O(N log N), which is between the slowest and fastest, and has substantially more code complexity than direct summation, but that complexity is worth the speed advantage. Besides those techniques, other options are possible that I did not cover. For example, multipole methods have asymptotically low computational complexity order but mathematically and numerically require much greater complexity.
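
    To make the O(N²) cost of direct summation concrete, here is a minimal Python sketch of a regularized Biot-Savart sum (the kernel regularization shown, clamping the distance to a core radius, is an illustrative assumption, not the exact formulation from earlier parts):

      import math

      def velocity_at(point, positions, vorticities, core_radius):
          """Biot-Savart direct summation: O(N) per query point, so O(N^2)
          to evaluate the velocity at every particle."""
          px, py, pz = point
          vx = vy = vz = 0.0
          for (x, y, z), (wx, wy, wz) in zip(positions, vorticities):
              rx, ry, rz = px - x, py - y, pz - z
              # Clamp the squared distance to regularize the singular kernel.
              r2 = max(rx * rx + ry * ry + rz * rz, core_radius * core_radius)
              scale = 1.0 / (4.0 * math.pi * r2 * math.sqrt(r2))
              # Accumulate (omega x r) scaled by the kernel.
              vx += scale * (wy * rz - wz * ry)
              vy += scale * (wz * rx - wx * rz)
              vz += scale * (wx * ry - wy * rx)
          return (vx, vy, vz)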

    The differential technique I presented entails solving a vector Poisson equation. Among the techniques I presented, this has the fastest asymptotic run time, and the math and code are not very complex. Based on that description, it seems like the obvious choice, but there is a catch that involves boundary conditions.
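
    For reference, the differential approach solves the vector Poisson equation

      \nabla^2 \mathbf{A} = -\boldsymbol{\omega}

    for the vector potential \mathbf{A}, then recovers velocity as \mathbf{u} = \nabla \times \mathbf{A}. The catch appears when choosing boundary conditions for that solve.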

    Solving any partial differential equation entails imposing boundary conditions: solving the equations at the spatial bounds of the domain. For integral techniques, the simplest conditions are “open,” which is tantamount to having an infinite domain without walls. The simulation algorithm handles solid objects, including walls and floors, which should suffice to impose boundary conditions appropriate to whatever scene geometry interacts with the fluid, so imposing additional boundary conditions would be redundant.

    The Poisson solver I presented uses a rectangular box with a uniform grid. It’s relatively easy to impose “no-slip” or “no-through” boundary conditions on the box, but then the fluid would move as though it were inside a box. You could move the domain boundaries far from the interesting part of the fluid motion, but because the box has a uniform grid, most of the grid cells would have nothing interesting in them yet would cost both memory and compute cycles. So ideally you’d have a Poisson solver that supports open boundary conditions, which is tantamount to knowing the solution at the boundaries; but the Poisson solver is meant to obtain that solution, so this creates a cyclic dependency.

    To solve this problem, I used the integral technique to compute a solution at the domain boundaries (a two-dimensional surface), and then used the Poisson solver to compute a solution throughout the domain interior. This hybrid approach runs in O(N) time (faster than treecode) and looks better than the treecode solver results.
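
    The sketch below illustrates that hybrid idea for one scalar component of the vector potential, under some loud assumptions: the caller supplies the integral solution as a callback, and a simple Gauss-Seidel relaxation stands in for the fast interior solver (relaxation alone is not O(N); the point here is only where the boundary values come from). All names are illustrative, not the series' production code.

        #include <functional>
        #include <vector>

        struct Grid3D {
            int n;                   // cells per axis (a cube, for brevity)
            float h;                 // cell spacing
            std::vector<float> v;
            Grid3D(int n_, float h_) : n(n_), h(h_), v(size_t(n_) * n_ * n_, 0.0f) {}
            float& at(int i, int j, int k)       { return v[(size_t(k) * n + j) * n + i]; }
            float  at(int i, int j, int k) const { return v[(size_t(k) * n + j) * n + i]; }
        };

        void hybridPoissonSolve(Grid3D& phi, const Grid3D& rhs, int iterations,
                                const std::function<float(int, int, int)>& integralSolutionAt) {
            const int n = phi.n;
            // 1. Open boundary conditions: the integral solver (treecode or direct
            //    summation) supplies Dirichlet values on the 2D shell of boundary
            //    cells -- far cheaper than evaluating it at every cell.
            for (int k = 0; k < n; ++k)
                for (int j = 0; j < n; ++j)
                    for (int i = 0; i < n; ++i)
                        if (i == 0 || j == 0 || k == 0 || i == n - 1 || j == n - 1 || k == n - 1)
                            phi.at(i, j, k) = integralSolutionAt(i, j, k);
            // 2. Relax the interior toward laplacian(phi) = rhs, shell held fixed.
            const float h2 = phi.h * phi.h;
            for (int it = 0; it < iterations; ++it)
                for (int k = 1; k < n - 1; ++k)
                    for (int j = 1; j < n - 1; ++j)
                        for (int i = 1; i < n - 1; ++i)
                            phi.at(i, j, k) = (phi.at(i - 1, j, k) + phi.at(i + 1, j, k)
                                             + phi.at(i, j - 1, k) + phi.at(i, j + 1, k)
                                             + phi.at(i, j, k - 1) + phi.at(i, j, k + 1)
                                             - h2 * rhs.at(i, j, k)) / 6.0f;
        }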

    Assessment

    The VPM works well for fire and smoke but does not work for liquid–gas boundaries. SPH works well for liquid–gas boundaries but looks viscous. My attempt to merge them didn’t work well, but I still suspect the approach has merit.

    Further Possibilities

    The techniques and code I presented in this series provide examples and a starting point for a fluid simulation for video games. To turn these examples into viable production code would require further refinements to both the simulation and the rendering code.

    Simulation

    Improvements to VPM

    I implemented a simplistic uniform grid spatial partitioning scheme. A lot of time is spent performing queries on that data structure. You could optimize or replace it, for example, with a spatial hash. Also, you could switch the per-cell container to a much more lightweight container.
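
    As one sketch of such a replacement, a spatial hash maps integer cell coordinates into a sparse table, so empty regions of the domain cost no memory at all. The prime constants below follow the widely used hash from Teschner et al.; everything else is illustrative rather than the series' actual code.

        #include <cmath>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct SpatialHash {
            float cellSize = 1.0f;
            std::unordered_map<uint64_t, std::vector<int>> cells;  // key -> particle indices

            uint64_t key(float x, float y, float z) const {
                const int64_t ix = int64_t(std::floor(x / cellSize));
                const int64_t iy = int64_t(std::floor(y / cellSize));
                const int64_t iz = int64_t(std::floor(z / cellSize));
                return uint64_t((ix * 73856093LL) ^ (iy * 19349663LL) ^ (iz * 83492791LL));
            }

            void insert(int particleIndex, float x, float y, float z) {
                cells[key(x, y, z)].push_back(particleIndex);
            }

            // Look up one cell's bucket; a neighbor query visits the 27 surrounding
            // cells by offsetting the coordinates by +/- cellSize, as with the grid.
            const std::vector<int>* bucket(float x, float y, float z) const {
                auto it = cells.find(key(x, y, z));
                return it == cells.end() ? nullptr : &it->second;
            }
        };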

    Although difficult, it’s possible to model those liquid–gas boundaries in the VPM. You could track surfaces by using level-sets, use the surface geometry to compute curvature, and use the curvature to compute surface tension, incorporating those effects into the vorticity equation. Computing curvature entails computing the Hessian, which is related to the Jacobian, which is already used to compute strain and stress.

    The VPM has a glaring mathematical problem: It starts with a bunch of small particles that carry vorticity in a very small region—so small that it’s tempting to think of them as points. Vorticity mathematically resembles a magnetic field, and you could draw an analogy between these vortex particles and tiny magnets. These magnets, however, would have only a single “pole,” which is both mathematically and physically impossible. Likewise, there is no such thing as a vortex “point”: if you had only one, the vorticity field could have divergence, which is neither mathematically possible nor physically meaningful. And yet this simulation technique has exactly that problem. One way to solve it is to use vortex filaments—for example, filaments that topologically form closed loops. The vortex loops, being closed, would have no net divergence. (See, for example, “Simulation of Smoke Based on Vortex Filament Primitives” by Angelidis and Neyret.) The filaments could also terminate at fluid boundaries, such as at interfaces with solid objects. The most obvious example is a rotating body: the body effectively has a vorticity, so vortex lines should pass through it.

    Note: The article “Filament-Based Smoke with Vortex Shedding and Variational Reconnection” as presented at SIGGRAPH 2010 got that wrong: The authors had rotating bodies within a fluid, but their vortex filaments did not pass through those bodies. They seem to have corrected that error in subsequent publications, and the YouTube* videos that had the error are no longer visible.

    Other Techniques

    Because SPH is also a fluid simulation technique that uses particles, my intuition is that it should complement the VPM, so that some hybrid could work both for wispy fluids and for liquid or goopy ones. I would not call my attempt successful, but I hope it inspires future ideas to unify those approaches. Even though my implementation failed, I suspect that the basic idea could still be made to work.

    This article series did not cover them, but grid-based methods work well in specialized cases, such as where potential flow is important, and for shallow-water waves. Similarly, spectral methods are capable of tremendous accuracy, but that is exactly what video games can forsake.

    Rendering

    In the code that accompanies these articles, most of the processing time goes toward rendering rather than simulation. That’s good news because the simplistic rendering in the sample code doesn’t exploit modern GPUs and so there’s plenty of opportunity to speed that up.

    The sample code performs several per-vertex operations, such as computing camera-facing quadrilaterals. That code is embarrassingly parallel, so a programmable vertex shader could execute it on the GPU quickly because the GPU has hundreds or thousands of processing units.

    It turns out, though, that adding more CPU cores to those routines that operate on each vertex doesn’t yield a linear speed-up, which suggests that memory bandwidth limits processing speed. Effectively, to speed up processing, the machine would need to access less memory. Again, a solution is readily available: Inside the vertex buffer, instead of storing an element per triangle vertex, store only a single element per particle. It could even be possible to transmit a copy of the particle buffer as is. Because you can control how the vertex shader accesses memory, that vertex buffer can be in any format you like, including the one the particle buffer has. This implies using less memory bandwidth.
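
    Here is a hedged sketch of that idea in DirectX* 11, with hypothetical names: the simulation's particle array is uploaded directly as a per-instance vertex buffer, and a vertex shader expands each instance into a camera-facing quad (for example, via DrawInstanced with a four-vertex triangle strip), so no per-corner data ever crosses the bus.

        #include <d3d11.h>

        struct Particle {          // same layout the simulation already uses
            float position[3];
            float size;
            float color[4];
        };

        ID3D11Buffer* createParticleInstanceBuffer(ID3D11Device* device,
                                                   const Particle* particles, UINT count) {
            D3D11_BUFFER_DESC desc = {};
            desc.ByteWidth      = sizeof(Particle) * count;
            desc.Usage          = D3D11_USAGE_DYNAMIC;     // CPU rewrites it every frame
            desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
            desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

            D3D11_SUBRESOURCE_DATA init = {};
            init.pSysMem = particles;                      // the particle buffer, as is

            ID3D11Buffer* buffer = nullptr;
            device->CreateBuffer(&desc, &init, &buffer);   // error checking omitted
            return buffer;
            // Draw with context->DrawInstanced(4, count, 0, 0); the vertex shader
            // builds the camera-facing corner from SV_VertexID and the instance data.
        }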

    Note that the GPU would likely still need a separate copy of the particle buffer, even if its contents were identical to the particle buffer the CPU used. The reason is that those processors run asynchronously, so if they shared a buffer, it would be possible for the CPU to modify a particle buffer in the middle of the GPU accessing that data, which could result in inconsistent rendering artifacts. In that case, it might be prudent to duplicate the particle buffer. (Perhaps a direct memory access engine could make that copy, leaving the CPU unencumbered.) In contrast, the visual artifacts of rendering the shared particle buffer might be so small and infrequent that the user might not notice. It’s worth trying several variations to find a good compromise between speed and visual quality.

    For the fluid simulation to look like a continuous, dense fluid instead of a sparse collection of dots, the sample code uses a lot of tracer particles—tens of thousands, in fact. Arguably, it would look even better with millions of particles, but processing and rendering are computationally expensive—both in time and memory. If you used fewer particles of the same size, the rendering would leave gaps. If you increased the particle size, the gaps would close but the fluid would look less wispy—that is, unless the particles grew only along the direction in which the flow stretches them. There are at least three ways to approach this problem:

    1. Use volumetric rendering instead of particle rendering. Doing so would involve computing volumetric textures and rendering them with fewer, larger camera-facing quads that access the volumetric texture; the results can look amazing.
    2. Elongate tracer particles in the direction they stretch. One way to do that is to consider tracers as pairs, where they are initialized near each other and are rendered as two ends of a capsule instead of treating every particle as an individual blob (see the sketch after this list). You could even couple this with a shader that tracks the previous and current camera transform and introduce a simplistic but effective motion blur; the mathematics are similar for both.
    3. Expanding on the idea in option 2, use even more tracers connected in streaks. For example, you could emit tracer particles in sets of four (or some N of your choice) and render those as a ribbon. Note, however, that rendering ribbons can be tricky if the particle cluster “kinks”; it can lead to segments of the ribbon folding such that it has zero area in screen space.
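
    Here is an illustrative sketch of the pairing idea from option 2, with hypothetical names. Each pair starts nearly coincident; advection moves the two ends independently, and the renderer later draws each pair as the endpoints of a camera-facing capsule that elongates along the stretch direction.

        #include <vector>

        struct Vec3 { float x, y, z; };

        struct TracerPair { int head, tail; };   // indices into the shared tracer array

        void emitTracerPairs(std::vector<Vec3>& tracers, std::vector<TracerPair>& pairs,
                             const Vec3& seed, float initialSeparation, int pairCount) {
            for (int i = 0; i < pairCount; ++i) {
                const int head = int(tracers.size());
                tracers.push_back(seed);
                tracers.push_back({ seed.x + initialSeparation, seed.y, seed.z });
                pairs.push_back({ head, head + 1 });
                // Advection moves both tracers independently; the renderer draws a
                // capsule from tracers[pair.head] to tracers[pair.tail].
            }
        }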

    About the Author

    Dr. Michael J. Gourlay works at Microsoft as a principal development lead on HoloLens* in the Environment Understanding group. He led the teams that implemented tracking, surface reconstruction, and calibration. He previously worked at Electronic Arts (EA Sports) as the software architect for the Football Sports Business Unit, as a senior lead engineer on Madden NFL* and original architect of FranTk* (the engine behind Connected Careers mode), on character physics and ANT* (the procedural animation system used by EA), on Mixed Martial Arts*, and as a lead programmer on NASCAR*. He wrote Lynx* (the visual effects system used in EA games worldwide) and patented algorithms for interactive, high-bandwidth online applications.

    He also developed curricula for and taught at the University of Central Florida’s Florida Interactive Entertainment Academy, a top-rated interdisciplinary graduate program that teaches programmers, producers, and artists how to make video games and training simulations.

    Prior to joining EA, he performed scientific research using computational fluid dynamics and the world’s largest massively parallel supercomputers. Michael received his degrees in physics and philosophy from Georgia Tech and the University of Colorado at Boulder.

    Lessons from the “Other” Side: Duskers and the Intel® Level Up Contest


    Download [PDF 1.77 MB]

    One of the greatest strengths in the independent game-development industry is the universal belief in trying something radically different. When they succeed, "indie" games don’t just tweak the edges—they blow up the boundaries, and go where their vision takes them, no matter what. So when indie game-developer Misfits Attic learned its game Duskers was the clear winner of the "Other" category in the 2016 Intel® Level Up Contest, company co-founders Tim and Holly Keenan felt elated and vindicated in equal measure.

    “Maybe it sounds weird,” Tim admitted, “but I was almost more proud of being in the ‘Other’ category than anything else. Winning in that category was really special for me.”

    Unlike many developers, who face a long gap between submitting a game to a contest and taking it live, Tim was able to launch Duskers right after submitting it to the Intel® Level Up Contest judges, because it was already far enough along to go into production. But that wasn’t the only thing different about his own unique developer’s journey. While many of the contest entrants use a touch screen for interaction, Duskers relies on an old-school command-line interface (CLI). Many of the winning games from this year’s competition feature lush, colorful landscapes with soothing, bright colors; Duskers is dark and bleak, befitting a game that takes place on derelict spaceships drifting in the cosmos. And where most commercial titles offer catchy, engaging music, Duskers relies on mysterious clanking and groaning noises to capture an eerie feeling of being alone and vulnerable.


    Figure 1: Limited visibility and lack of resources make exploration dangerous for the drones.

    You might be thinking that there’s no way a game like that could even get funded—how would you even pitch such a title? “I almost designed this game so I couldn’t pitch it,” Tim laughingly admitted. He believes that’s the beauty of indie gaming in a nutshell—it’s a game that you can’t easily explain, you can’t easily franchise, you won’t be exploring movie rights, and it won’t work on a console or mobile device. It’s just a great game that stays true to itself all the way through.

    Despite the complex backstory, the premise is fairly simple. Your job is to pilot a small group of drones as they remotely explore abandoned spaceships. Through typed-in commands you can power up, acquire scrap metal, fire off sensors, and go about your job. There are unseen enemies inhabiting the vessels, and you can’t fight them—you have to lock them in rooms or send them out an airlock. Players are completely alone—there are no other humans to interact with, and the drones have limited light to play across the landscape. Plus, the laws of physics apply—things deteriorate and run out of power. There is a strong feeling of isolation, of helplessness, of anxiety—and it’s all according to plan.


    Figure 2: Players can pull up an overview of the sector they are exploring.

    Misfits Attic Background Reveals “Other” Tendencies

    Tim Keenan graduated from Georgia Tech with a bachelor’s degree in computer science, but his heart has always been in computer graphics. “I knew at a young age that I wanted to do video games,” he recalled. “I thought computer graphics were super cool.”

    He quickly landed at Rainbow Studios*, then after a couple of years he moved on to DreamWorks Animation SKG*, where he worked on several films. His jobs ran the gamut between art and science—working on effects systems to create lush foliage in Madagascar, and creating beautiful fire for How to Train Your Dragon, among other tasks. Tim also took classes in screenwriting, did some improv comedy, and tried a little acting and directing. He credits that diverse background with giving him a better perspective on the creative process that goes into producing a title.

    His wife Holly is the other half of Misfits Attic. She has parlayed her fine-arts background and graphic design degree into UX and interaction design, which is her day job. Tim and Holly co-founded Misfits Attic in 2011, and their first game, A Virus Named TOM, did well enough to keep the lights on. It’s a quirky action-puzzle game with a co-op mode, “like trying to defuse bombs while people throw potatoes at you,” Tim said.


    Figure 3: The glowing red colors against a dark black background give battle sequences a special graphical appeal.

    That limited success led Tim to create Duskers. Holly was working full time, and stayed away from this new project. Tim had already noodled around with an idea for a game whose core design pillar was to be as close to a real experience as he could get. “I really wanted you to feel like you were actually there, and you were actually doing this thing,” he said. The design decisions that followed were giving him fits, but once he settled on the way to play the game, he felt freed up to pursue his concept. “Once I created this idea that you were a drone operator, all of the next decisions started to fall into place.”

    Duskers has sometimes been described as “roguelike” due to its role-playing aspect. The 1980 game Rogue is often credited with spawning an entire subgenre of role-playing games where players explore a dungeon or labyrinth. Tim says his inspiration came from there, but also from the movies The Road and Alien, and especially the game Capsule by Adam Saltsman and Robin Arnott. “Capsule did a lot with audio, and their sound design really made the game feel visceral and real,” he explained. “That really inspired me.”

    Going Against the Grain with a CLI

    Being part of the developer community, Tim got plenty of advice suggesting changes. He was told to put people in the game, because players would care more if avatars died. Friends suggested he go in the direction of a real-time strategy game, with the familiar “drag, select, right-click” model. But he stuck to his vision and kept going. “I didn’t want you to be playing the drone operator—I wanted you to feel like you were the drone operator.”

    The CLI gave the game a retro feel that definitely attracted debate. “My friends in the Bay Area—game designers who were very talented—would tell me, ‘I get it, but users aren’t going to get it,’ ” he recalled. “Maybe I was just being stubborn, but the more people told me to take it out, the more I wanted to keep the CLI in.” The decision was almost liberating: if he had already given up a certain market share, why not push as hard as he could to realize the complete vision and make the sacrifice worth it?

    Figure 4: Players receive instruction through a dense, informative interface.

    The result is an amazing game that draws you in slowly. Your drones can power up by discovering generators, and they gather scrap as they find it, if you tell them to. There are unseen aliens out there, which your sensors can pick up, and once they find you, they will destroy you. At first commands don’t come quickly to mind, and the reference manual is required reading. But at some point, after playing awhile, the commands you need start to pour automatically out of your fingers, and you can string them together with efficiency. That’s when the game clicks. It’s a feeling you’d never get in a Triple-A title, but that sense of accomplishment makes Duskers a classic indie game.

    Building the Game

    For their first title, Misfits Attic chose Microsoft Xbox New Architecture* (Microsoft XNA*) for a game engine. Announced in 2004 as a set of tools with a managed runtime environment, it was eventually superseded by the Microsoft Windows* Phone 7 Developer Tools in 2010. So Misfits Attic knew they needed a new engine for Duskers. The team (Tim, plus another programmer) also had experience with C#, which was compatible with Unity*, so Unity became the choice for a new engine. “There was a large, strong development community behind Unity, and we knew we could do cross-platform work easier. But it was a pain learning a new tool. Every game I work on, I have to work with new technology. It seems so rare that I can just reuse something.”

    By not writing his own game engine, Tim was free to push his own creativity. “As an indie, what I have to contribute is my design. I feel like I never end up pushing technology in any of my games, because I only have a limited amount of time to develop something. The existing technologies give me so much space to play in, that if I can’t work within those constraints, it’s not good. I want to spend the time iterating on the game design.”

    Unity also provided a path to port from the PC to Mac* and Linux* versions. Tim came up against a few technical hurdles, especially with the interface. He knew he needed a good menu system, for example. Typically, games would use text boxes, but classic text boxes didn’t seem to work for Duskers, because they would drop letters when players typed frantically to get commands started. The autocomplete function didn’t work right, either. Tim and his co-worker studied other games and, because they weren’t typical game designers, they figured they would just have to build the text function from scratch. They came up with their own hand-built system of menu buttons, with the first character in brackets, so players could open a menu with that letter.

    The AI that drives the game turned out to be quite simple. “We had always intended to make everything a little bit smarter and a little more intelligent,” Tim said, but as the game construction went on, it just didn’t matter—the AI didn’t need to be that smart. For example, Tim doesn’t mind that the drones can periodically get hung up on a doorway. “It’s annoying, but it reminds you that they are just stupid little drones, and, to me, that made it so much more real,” he said. So he stopped trying to make the pathfinding perfect. To some users, that might be a show-stopper for a mass-market game. But for a leader in the “Other” category, it all made sense.

    Ready to Publish

    Raising money for a unique, independent project was never easy. Tim laughs as he recalls the pitch he’d make to producers for funding. “Okay, so there’s a command line,” he’d say to start.

    “So, no mobile offerings, no console ports…?” the audience would respond, not altogether positively.

    “Right,” Tim would say. “And it’s about feeling completely isolated, and it’s going to be hard to see things, and there’s not going to be any soundtrack, and no humans.”

    That may be intimidating to explain to a room full of experienced Triple-A game-producers, but it was fine for the Intel Level Up Contest. Tim had assumed that the contest was limited to touch-control games, but he found out that touch controls were only a “nice to have” feature. He had picked up some new Intel® hardware at a Steam* Development Days event, and he felt like he should return the favor and enter. But he had no idea what to expect.


    Figure 5: Duskers gives players a strong feeling of isolation, helplessness, and anxiety, thanks to a screen dominated by black voids and a stark, almost random soundtrack. Conquering the game gives players a huge sense of satisfaction.

    “When we found out about our award in the ‘Best Other’ category, I tweeted out about it right away. My friends started joking around, saying ‘Oh, you made that Other game.’ But I really dug it.”

    Conclusion

    By staying with his vision and producing a game that defies easy description, Tim Keenan stayed true to his indie roots. Funding has been a challenge every step of the way, but he’s explored every alternative he could find and made it all work. He’s looking forward to some help from Intel for the next phase—including some troubleshooting on integrated graphics—while basking in the glow of winning a prestigious award. In addition, Intel sponsored him at the Design, Innovate, Communicate, Entertain (DICE) Summit in 2016, which he credits with altering his perspective on the gaming industry. “I’m incredibly grateful for all of Intel’s assistance,” he said. “They have supported me more than any other corporation.”

    Figure 6: Tim Keenan, left, with Mitch Lum of Intel at the PAX West 2016 conference in Seattle.

    Continued funding will be an issue, he acknowledged. “We were fortunate enough to get Indie Fund-ed for Duskers, and that in itself was an amazing experience,” he said. In his blog post detailing the experience, Tim credits independent game-developers that wanted him to succeed, saying he now has a “heavy indie karma debt” to repay.

    But the biggest lesson he learned is to stay true to the vision. “If you focus every decision around your artistic intent, you can actually convey that vision to players. At the end of the day, there’s a lot of financial pressure on indie game-developers,” he said. “If you don’t make money, you can’t keep doing what you love. But sometimes it’s riskier to not take risks—and especially in today’s climate, where consumers have so many choices, you have to stand out.”

    Resources

    Duskers main site: http://duskers.misfits-attic.com

    2016 Intel Level Up Contest: https://software.intel.com/en-us/blogs/2016/05/27/2016-intel-level-up-contest-by-the-numbers

    Unity Engine: https://unity3d.com

    Adaptive Screen Space Ambient Occlusion


    Download Document  Download Code Samples

    This article introduces a new implementation of the effect called adaptive screen space ambient occlusion (ASSAO), which is specially designed to scale from low-power devices and scenarios up to high-end desktops at high resolutions, all under one implementation with a uniform look, settings, and quality that is equal to the industry standard.

    Screen space ambient occlusion (SSAO) is a popular effect used in real-time rendering to produce small-scale ambient effects and contact shadow effects. It is used by many modern game engines, typically using 5 percent to 10 percent of the frame GPU time. Although a number of public implementations already exist, not all are open source or freely available, or provide the level of performance scaling required for both low-power mobile and desktop devices. This is where ASSAO fills needed gaps.

    This article focuses on how to understand the sample code and to further integrate or port it. It also covers implementation specifics, available options, settings, and trade-offs in its use. An article detailing the implementation is featured in the upcoming book GPU Zen (GPU Pro* 8).


    Figure 1. Example of adaptive SSAO applied to a test scene in Unity 4*.

    A full DirectX* 11 implementation is provided under the MIT license in an easy-to-integrate package.

    Algorithm Overview

    ASSAO is an SSAO implementation tuned for scalability and flexibility. The ambient occlusion (AO) implementation is based on a solid-angle occlusion model similar to “Horizon-Based Ambient Occlusion” [Bavoil et al. 2008], with a novel progressive sampling kernel disk. The performance framework around it is based on a 2 x 2 version of cache-friendly deinterleaved rendering, “Deinterleaved Texturing for Cache-Efficient Interleaved Sampling” [Bavoil 2014], and optional depth MIP-mapping, “Scalable Ambient Obscurance” [McGuire et al. 2012].

    Scaling quality with respect to performance is achieved by varying the number of AO taps (enabled by the progressive sampling kernel) and toggling individual features at various preset levels.

    Stochastic sampling is used to share AO values between nearby pixels (based on rotating and scaling the sampling disk), with a de-noise blur applied at the end. The de-noise blur is edge-aware in order to prevent the effect from bleeding into unrelated background or foreground objects, which causes haloing. Edges can be based on depth only, or on depth and normals. (The latter results in higher quality but costs more in processing.) This smart blur is performed in the 2 x 2 deinterleaved domain for optimal cache efficiency, with only the final pass done at full resolution during the interleaving (reconstruction) pass.

    In practice, it is a multi-pass, pixel shader-based technique. At the High preset, the main steps are:

    1. Preparing depths
      1. 2 x 2 deinterleave input screen depth into four quarter-depth buffers and convert values to viewspace. Also, if input screen normals are not provided, reconstruct them from depth.
      2. Create MIPs for each of the smaller depth buffers (not done in Low or Medium presets).
    2. Computing AO term and edge-aware blur for each of the four 2 x 2 deinterleaved parts
      1. Compute the AO term and edges and store them in an R8G8 texture.
      2. Apply edge-aware smart blur (one to six passes, based on user settings).
    3. Combine four parts into the final full resolution buffer and apply final edge-aware blur pass.
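
    To make step 1.1 concrete, the following sketch shows the 2 x 2 deinterleave indexing on the CPU. ASSAO itself performs this in pixel shaders, and the buffer layout and names here are illustrative; the point is that each quarter buffer then samples coherently over a wider screen area, which is the cache win.

        #include <vector>

        // 2 x 2 deinterleave of a full-resolution depth buffer into four
        // quarter-resolution buffers, one per 2 x 2 sub-position.
        void deinterleaveDepth(const std::vector<float>& fullDepth, int width, int height,
                               std::vector<float> quarter[4]) {
            const int qw = width / 2, qh = height / 2;
            for (int q = 0; q < 4; ++q) quarter[q].resize(size_t(qw) * qh);
            for (int y = 0; y < qh; ++y)
                for (int x = 0; x < qw; ++x)
                    for (int q = 0; q < 4; ++q) {
                        const int sx = 2 * x + (q & 1);    // source column for sub-position q
                        const int sy = 2 * y + (q >> 1);   // source row
                        quarter[q][size_t(y) * qw + x] = fullDepth[size_t(sy) * width + sx];
                    }
        }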

    The Highest/Adaptive quality preset has an additional base AO pass used to provide importance heuristics that guide the per-pixel variable sample count for the main AO pass.

    Table 1 gives an overview of performance numbers. These numbers are for reference and can vary based on driver and hardware specifics. Changing the effect settings will not change the performance, with the exception of edge-aware blur; increasing the blur pass count increases the cost.

    Preset   | Skull Canyon (Iris Pro 580) |          GTX 1080         |           RX 480
             |         1920 x 1080         | 1920 x 1080 | 3840 x 2160 | 1920 x 1080 | 3840 x 2160
    ---------+-----------------------------+-------------+-------------+-------------+------------
    Low      |             2.4             |    0.28     |    1.21     |    0.64     |    2.58
    Medium   |             4.1             |    0.49     |    2.25     |    1.01     |    4.09
    High     |             6.9             |    0.77     |    3.15     |    1.34     |    4.74
    Highest  |            10.4             |    1.12     |    4.65     |    2.07     |    7.44

    Table 1. ASSAO effect cost in milliseconds at various presets, resolutions, and hardware.

    Profiling was done with screen normals provided, a two-pass blur, and the Highest adaptive target set to 0.45. Scaling the effect (quality versus performance) between the Low, Medium, High, and Highest presets is done by varying the number of AO taps, as well as by toggling individual features on or off. Table 2 shows the detailed setup of these presets.

    Preset             | Sample count | 2 x 2 deinterleaved | Depth MIPs | Edge-aware blur
    -------------------+--------------+---------------------+------------+------------------------
    Low                | 6            | yes                 | no         | no
    Medium             | 10           | yes                 | no         | yes (depth only)
    High               | 24           | yes                 | yes        | yes+ (depth + normals)
    Highest (Adaptive) | 10–64        | yes                 | yes        | yes+ (depth + normals)

    Table 2. Details of ASSAO presets.

    Sample Overview

    The sample uses DirectX 11 and is compatible with Windows* 7 64-bit and above; Microsoft Visual Studio* 2015 is required to compile it.


    Figure 2. Adaptive SSAO sample layout.

    The Crytek Sponza* scene included in the sample is used by default, and the basic effect profiling metrics are shown in the upper-right graph. Below the graph are a number of dials used to change effect settings and quality or to debug the effect. The main settings are:

    1. Effect enabled

      Toggles the effect off/on. See screen images 0 (off), 1 (on).

    2. Effect radius

      Radius of ambient occlusion in viewspace units. See screen images 4, 5, 6.

    3. Effect strength

      Linear effect multiplier, useful for setting the effect strength in conjunction with effect power, as well as for fading the effect in/out. See screen images 7, 8, 9, 10.

    4. Effect power

      Exponential effect modifier: occlusion = pow(occlusion, effectPower). The best way to tweak the power of the effect curve. See screen images 11, 12, 13, 14.

    5. Detail effect strength

      Additional two-pixel wide kernel used to add a high-frequency effect. High values will cause aliasing and temporal instability. See screen images 15, 16, 17, 18, 19.

    6. Blur amount

      Higher number of blur passes produces a smoother effect with less high-frequency variation, which can be beneficial (reduces aliasing), but also increases cost. See screen images 20, 21, 22, 23, 24.

    7. Blur sharpness

      Determines how much to prevent blurring over depth-based (and optionally normal-based) edges; this prevents bleeding between separate foreground and background objects, which causes haloing and other issues. A value of 1 means fully sharp (no blurring over edges), and anything less relaxes the constraint. Values close to 1 are useful to control aliasing. See screen images 25, 26.

    8. Deferred Path

      In the deferred path, the inputs for the effect are the screen depth and normalmap textures. Conversely, when the forward path is used, only the depth texture is an input; the normalmap is reconstructed from depth, which adds to the cost and produces slightly different results. See screen images 27, 28.

    9. Expand resolution

      Near the screen edges, part of the area covered by the effect kernel lies outside of the screen. While different sampling modes (for example, clamp or mirror) can be used to achieve different results (see m_samplerStateViewspaceDepthTap), the best solution is to expand the render area and resolution by a certain percentage while creating the depth buffer, so that the data needed by the AO effect near the edges is available. This option does that, and ASSAO uses the optional scissor rectangle to avoid computing AO for the expanded (not visible) areas. See screen images 29, 30.

    10. Texturing enabled

      Toggles texturing to make the AO effect more visible (lighting is still applied). See screen images 31, 32.

    11. Quality preset

      Switches between the four quality presets, described in Tables 1 and 2. See screen images 33, 34, 35, 36.

      For the Highest/Adaptive preset, Adaptive target controls the progressive quality target that can be changed at runtime to quickly trade off quality versus performance. See screen images 37, 38, 39.

    12. Switch to advanced UI

      To debug the effect in more detail, the sample can be switched to advanced UI, which provides access to additional scenes (see screen images 40, 41, 42) and the development version of the effect, which allows for more in-depth profiling and various debug views to show normals (screen image 43), detected edges (screen image 44), all AO samples for a selected pixel (screen image 45), and adaptive effect heatmap (screen image 46).

    Integration Details

    For quick integration into a DirectX 11 codebase, only three files from the sample project are needed:

    Projects\ASSAO\ASSAO\ASSAO.h
    Projects\ASSAO\ASSAO\ASSAODX11.cpp
    Projects\ASSAO\ASSAO\ASSAO.hlsl

    These contain the whole ASSAO implementation with no other dependencies except the DirectX 11 API.

    The basic ASSAO integration steps are:

    1. Add ASSAO.h and ASSAODX11.cpp into your project.
    2. Add the ASSAO.hlsl file where it can be loaded or, alternatively, see “USE_EMBEDDED_SHADER” defined in ASSAOWrapperDX11.cpp (and the project custom build step) for details on how to easily embed the .hlsl file into the binary.
    3. Create an ASSAO_Effect object instance after DirectX 11 device creation by providing the ID3D11Device pointer and the shader source buffer to the static ASSAO_Effect::CreateInstance(…). Don’t forget to destroy the object using a call to ASSAO_Effect::DestroyInstance() before the DirectX device is destroyed.
    4. Find a suitable location in your rendering post-processing pipeline: SSAO is often applied directly onto the light accumulation or post-tonemap color buffers, before other screen-space effects, usually using multiplication blend mode. A more physically correct approach sometimes used is to render the AO term into a separate buffer for later use in the lighting pass. In any case, since the required inputs are the scene depth (and screen space normals, if available), it means that ASSAO can be drawn once those become available.
    5. Set up the per-frame inputs structure by filling in the ASSAO_InputsDX11:
      1. ScissorLeft/Right/Top/Bottom are only needed if the effect output needs to be constrained to a smaller rectangle, such as in the case when the Expand resolution approach is used. Otherwise, defaults of 0 indicate that the output goes into the whole viewport.
      2. ViewportX/Y must be set to 0 and ViewportWidth/Height to the output render target and input depth and screen space normals texture resolution. Custom viewports are not (yet) supported.
      3. ProjectionMatrix must be set to the projection used to draw the depth buffer. Both LH and RH projection matrices are supported, as well as the reversed Z (http://outerra.blogspot.de/2012/11/maximizing-depth-buffer-range-and.html).
      4. NormalsWorldToViewspaceMatrix (optional) is needed if the input screen space normals are not in the viewspace, in which case this matrix is used to convert them.
      5. MatricesRowMajorOrder defines the memory layout of the input ProjectionMatrix and NormalsWorldToViewspaceMatrix.
      6. NormalsUnpackMul and NormalsUnpackAdd default to 2 and -1 respectively, and are used to unpack normals into [-1, 1] range from the UNORM [0, 1] textures that they are commonly stored in. When normals are provided in a floating point texture, these two values need to be set to 1 (mul) and 0 (add).
      7. DrawOpaque determines the blending mode: if true, the contents of the selected render target will be overwritten; if false, multiplicative blending mode is used.
      8. DeviceContext (DirectX 11-specific) should be set to the ID3D11DeviceContext pointer used to render the effect.
      9. DepthSRV (DirectX 11-specific) should be set to input depth data.
      10. NormalSRV (DirectX 11-specific) should be set to input screen space normals or nullptr if not available (in which case normals will be reconstructed from depth data).
      11. OverrideOutputRTV (DirectX 11-specific) should be set to nullptr or to the output render target. If it is set to nullptr, the currently selected RTV is used.
    6. Set up the effect settings structure defined in ASSAO_Settings. They are detailed in the Sample overview section.
    7. Call the ASSAO_Effect::Draw function. All current DirectX 11 states are backed up and restored after the call to ensure seamless integration.
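
    Putting steps 3 through 7 together, here is a hedged sketch of an integration. The structure and field names follow the article's descriptions, but the exact types and signatures (matrix layout, CreateInstance arguments, the Draw call) should be taken from ASSAO.h rather than from this sketch.

        #include <d3d11.h>
        #include "ASSAO.h"

        // 'effect' is created once after device creation, roughly:
        //   ASSAO_Effect* effect = ASSAO_Effect::CreateInstance(device, shaderSource, ...);
        // and destroyed with ASSAO_Effect::DestroyInstance() before the device dies.
        void renderSSAO(ASSAO_Effect* effect,
                        ID3D11DeviceContext* context,
                        ID3D11ShaderResourceView* depthSRV,
                        ID3D11ShaderResourceView* normalsSRV,  // nullptr -> reconstruct from depth
                        const float* projectionMatrix,         // the matrix the depth was drawn with
                        int width, int height)
        {
            // Step 5: per-frame inputs.
            ASSAO_InputsDX11 inputs;
            inputs.ViewportX = 0;  inputs.ViewportY = 0;   // custom viewports are not supported
            inputs.ViewportWidth  = width;
            inputs.ViewportHeight = height;
            inputs.ProjectionMatrix = projectionMatrix;    // LH/RH and reversed Z all work
            inputs.MatricesRowMajorOrder = true;           // match your math library's layout
            inputs.NormalsUnpackMul = 2.0f;                // UNORM [0,1] -> [-1,1]
            inputs.NormalsUnpackAdd = -1.0f;               // (use 1 and 0 for float textures)
            inputs.DrawOpaque = false;                     // multiplicative blend onto the RTV
            inputs.DeviceContext = context;
            inputs.DepthSRV  = depthSRV;
            inputs.NormalSRV = normalsSRV;
            inputs.OverrideOutputRTV = nullptr;            // draw into the currently bound RTV

            // Step 6: effect settings (the dials described in the Sample Overview section).
            ASSAO_Settings settings;                       // defaults here; tweak as needed

            // Step 7: draw; DirectX 11 state is backed up and restored by the call.
            effect->Draw(settings, &inputs);
        }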

    The following files from the sample project provide an example of integration:

    Projects\ASSAO\ ASSAOWrapper.h
    Projects\ASSAO\ ASSAOWrapper.cpp
    Projects\ASSAO\ ASSAOWrapperDX11.cpp

    The latest source code can be downloaded from https://github.com/GameTechDev/ASSAO.

    Citations

    [Bavoil et al. 2008] Bavoil, L., Sainz, M., and Dimitrov, R. 2008. “Image-Space Horizon-Based Ambient Occlusion.” In ACM SIGGRAPH 2008 Talks, ACM, New York, NY, USA, SIGGRAPH ’08, 22:1–22:1.

    [McGuire et al. 2012] McGuire, M., Mara, M., and Luebke, D. 2012. “Scalable Ambient Obscurance.” In High-Performance Graphics (HPG) 2012.

    [Bavoil 2014] Bavoil, L. 2014. “Deinterleaved Texturing for Cache-Efficient Interleaved Sampling.” NVIDIA.

    Notices

    This sample source code is released under the Intel Sample Source Code License Agreement.
