
Slow floating license checkout


The following issues can cause slow license checkouts:

  • Old license server information or license files.  Check the following places for invalid licenses and delete any found:
    • INTEL_LICENSE_FILE environment variable - make sure port@host is correct and any folders specified do not contain invalid license files.
    • For Linux: /opt/intel/licenses
    • For Windows: [Program Files]\Common Files\Intel\Licenses
  • A bug introduced in RHEL*/CentOS* 7.2.  This adds a 25-second delay to the floating license checkout when IPv6 is disabled.  More information here.
  • Running 2016 or newer versions of the compiler on a remote workstation.  A license caching feature available in the 2015 product version was disabled in the 2016 version.  More information here.

TBB flowgraph: using streaming_node


Introduction

The Intel® Threading Building Blocks (Intel® TBB) library provides a set of algorithms that enable parallelism in C++ applications. Since Intel® TBB 4.0, unstructured parallelism, dependency graphs and data flow algorithms can be expressed with flow graph classes and functions. The flow graph interface makes Intel® TBB useful for cases that are not covered by its generic parallel algorithms, while keeping users away from lower-level peculiarities of its tasking API.

Increasingly, systems are becoming heterogeneous and are starting to incorporate not only the power of CPUs but also different kinds of accelerators that are suitable for particular sets of tasks.

In an effort to better support heterogeneous solutions, async_node was added to the flow graph API to support parallel activities, external worker threads (threads that are not in the TBB thread pool), and so on. The main limitation of async_node is that the result must be returned to the graph at the same node: you cannot start an async_node activity at one point and return its result at another point of the graph.

The problem described above can be resolved with another new Intel® TBB feature: async_msg. This concept is quite similar to the standard C++ future/promise concept, and it allows results to be returned to the graph at any point. You just need to pass the async message from the node where the async activity was started to the node where the async result is needed.

Moreover, Intel® TBB provides a special node with OpenCL support in it: opencl_node. The details can be found here: https://software.intel.com/en-us/blogs/2015/12/09/opencl-node-overview.

During the implementation of the node, we found that some concepts are quite generic and can be used for any heterogeneous APIs. For example, async_msg was developed as an implementation of the postponed asynchronous result concept for the Intel TBB flow graph. Another generic heterogeneous concept was implemented in the streaming_node class, which is described below.

streaming_node main ideas & workflow

As we look at the usual asynchronous and/or heterogeneous usage model, we can find that the model usually includes the following steps:

  • Receive input data.
  • Select a device for the kernel execution later.
  • Send the kernel arguments to the device.
  • Enqueue the kernel for execution on the device.
  • Get future result handlers from the device and store them somehow.
  • Send a future result object (async_msgs in fact) to the next graph node.

The workflow looks quite generic and independent of the particular device API. In Intel® TBB, the schema was implemented in the streaming_node class. However, the schema is quite abstract, so to make it workable we need to select a particular device API. In Intel® TBB, we refer to device APIs as Factories. We tried to make the Factory concept as simple as possible.

Let us look at the steps above from a responsibility-areas point of view: some steps are implemented by streaming_node itself, some through user-defined functionality, and some through the Factory concept (an abstraction of a device API):

  • Receive input data.

 Responsibility of streaming_node

  • Select a device for the kernel execution later.

End-user’s responsibility (implemented via a special user functor)

  • Send the kernel arguments to the device.

streaming_node calls Factory::send_data and gets dependency handlers back

  • Enqueue the kernel for execution on the device + Get future result handlers from the device and store them somehow.

streaming_node calls Factory::send_kernel and gets dependency handlers back for the future result

  • Send a future result object to the next graph node.

streaming_node creates an async_msg object with the saved dependency handlers in it

The main streaming_node workflow becomes clear from the text above.

Please note that dependency handlers are device API-specific, so only the Factory can know the particular dependency type. In the current implementation, the async_msg class cannot store any additional dependencies, so the Factory must provide a dependency_msg class derived from async_msg. As a result, an additional requirement for the Factory concept is that it provides the Factory::async_msg_type type. In addition, the main Factory interfaces must be able to get and update (to store dependencies) Factory::async_msg_type objects:

Factory::send_data(device_type device, Factory::async_msg_type& dependencies[])
Factory::send_kernel(device_type device, kernel_type kernel, Factory::async_msg_type& dependencies[])

Hello, World!

Let us try to implement asynchronous “Hello World” printing with the streaming_node.

We will use a C++ thread in place of a programmable device.

The following classes and functions are needed to implement it:

  1. A special asynchronous message class tailored for this case (derived from async_msg).
  2. A thread with parallel printing in it (our “device”).
  3. A Factory that can work with the “device”.
  4. A simple device_selector.
  5. A main() function with 2 nodes.

Let us implement the components one by one:

hello_world.cpp:  part 1: user_async_msg class
#include <iostream>
#include <thread>
#include <mutex>
#include <cassert>
#include <tuple>

#define TBB_PREVIEW_FLOW_GRAPH_NODES 1
#define TBB_PREVIEW_FLOW_GRAPH_FEATURES 1

#include "tbb/tbb_config.h"
#include "tbb/concurrent_queue.h"
#include "tbb/flow_graph.h"

template<typename T>
class user_async_msg : public tbb::flow::async_msg<T>
{
public:
    typedef tbb::flow::async_msg<T> base;
    user_async_msg() : base() {}
    user_async_msg(const T& input) : base(), mInputData(input) {}
    const T& getInput() const { return mInputData; }

private:
    T mInputData;
};        

 

In the listing there are a few standard includes as well as several Intel TBB flow graph includes and definitions that enable async_msg and streaming_node classes in the Intel TBB headers.

 

The user_async_msg class is quite trivial: it just adds the mInputData field to store the original input value for processing in the asynchronous thread.

hello_world.cpp:  part 2: user_async_activity class
class user_async_activity { // Async activity singleton
public:
    static user_async_activity* instance() {
        if (s_Activity == NULL) {
            s_Activity = new user_async_activity();
        }
        return s_Activity;
    }

    static void destroy() {
        assert(s_Activity != NULL && "destroyed twice");
        s_Activity->myQueue.push(my_task()); // Finishing queue
        s_Activity->myThread.join();
        delete s_Activity;
        s_Activity = NULL;
    }

    void addWork(const user_async_msg<std::string>& msg) {
        myQueue.push(my_task(msg));
    }

private:
    struct my_task {
        my_task(bool finish = true)
            : myFinishFlag(finish) {}

        my_task(const user_async_msg<std::string>& msg)
            : myMsg(msg), myFinishFlag(false) {}

        user_async_msg<std::string> myMsg;
        bool                        myFinishFlag;
    };

    static void threadFunc(user_async_activity* activity) {
        my_task work;
        for(;;) {
            activity->myQueue.pop(work);
            if (work.myFinishFlag)
                break;
            else {
                std::cout << work.myMsg.getInput() << ' ';
                work.myMsg.set("printed: " + work.myMsg.getInput());
            }
        }
    }

    user_async_activity() : myThread(&user_async_activity::threadFunc, this) {}
private:
    tbb::concurrent_bounded_queue<my_task>  myQueue;
    std::thread                             myThread;
    static user_async_activity*             s_Activity;
};

user_async_activity* user_async_activity::s_Activity = NULL;

The user_async_activity class is a typical singleton with two common static interfaces: instance() and destroy().

The class wraps a standard thread (we used the std::thread class), which processes tasks from a task queue (implemented via the tbb::concurrent_bounded_queue  class).

Any thread can add a new task to the queue via the addWork() method, while the worker thread processes the tasks one by one. For every incoming task, it prints the original input string to the console and uses the async_msg::set interface to return the result back to the graph. The following pseudocode shows the format of the result: Result = ‘printed: ’ | original string, where “|” represents string concatenation.

hello_world.cpp:  part 3: device_factory class
class device_factory {
public:
    typedef int device_type;
    typedef int kernel_type;

    template<typename T> using async_msg_type = user_async_msg<T>;
    template <typename ...Args>
    void send_data(device_type /*device*/, Args&... /*args*/) {}

    template <typename ...Args>
    void send_kernel(device_type /*device*/, const kernel_type& /*kernel*/, Args&... args) {
        process_arg_list(args...);
    }

    template <typename FinalizeFn, typename ...Args>
    void finalize(device_type /*device*/, FinalizeFn /*fn*/, Args&... /*args*/) {}

private:
    template <typename T, typename ...Rest>
    void process_arg_list(T& arg, Rest&... args) {
        process_one_arg(arg);
        process_arg_list(args...);
    }

    void process_arg_list() {}

    // Retrieve values from async_msg objects

    template <typename T>
    void process_one_arg(async_msg_type<T>& msg) {
        user_async_activity::instance()->addWork(msg);
    }

    template <typename ...Args>
    void process_one_arg(Args&... /*args*/) {}
};

In this example, the implementation of an asynchronous device factory is simple; in fact, it implements only one real factory method: send_kernel. The method receives the incoming async messages as a C++ variadic template. As a result, in the implementation we just need to take each message from the list and pass it to the addWork() interface of our asynchronous activity.

Moreover, the Factory provides the correct async_msg_type for streaming_node, trivial (unused here) types for the device and the kernel, and empty implementations of the expected (but unused here) methods send_data and finalize. In your own Factory, you can implement send_data to upload data to the device before the kernel runs. Additionally, if the next node in the graph can reject incoming messages from streaming_node, the Factory must implement the finalize() method so that the provided finalization functor is called when the device reports that the kernel has finished.

With all of the above in mind, the Factory concept can be implemented in a few dozen lines of code in simple cases.

hello_world.cpp:  part 4: device_selector class
template<typename Factory>
class device_selector {
public:
    typename Factory::device_type operator()(Factory&) { return 0; }
};

In this simple example we have just one device, so the device selector functor is trivial.

hello_world.cpp:  part 5: main()
int main() {
    using namespace tbb::flow;
    typedef streaming_node< tuple<std::string>, queueing, device_factory > streaming_node_type;

    graph g;
    device_factory factory;
    device_selector<device_factory> device_selector;
    streaming_node_type node(g, 0 /*kernel*/, device_selector, factory);
    std::string final;
    std::mutex final_mutex;

    function_node< std::string > destination(g, unlimited, [&g, &final, &final_mutex](const std::string& result) {
        std::lock_guard<std::mutex> lock(final_mutex);
        final += result + "; "; // Parallel access
        g.decrement_wait_count();
    });

    make_edge(output_port<0>(node), destination);
    g.increment_wait_count(); // Wait for result processing in 'destination' node
    input_port<0>(node).try_put("hello");
    g.increment_wait_count(); // Wait for result processing in 'destination' node
    input_port<0>(node).try_put("world");

    g.wait_for_all();
    user_async_activity::destroy();

    std::cout << std::endl << "done"<< std::endl << final << std::endl;
    return 0;
}

In the main() function we create all the required components: a graph object, a factory object, a device selector, and two nodes: one streaming_node and one destination function_node, which processes asynchronous results. make_edge() is used to connect these two nodes together. By default, the flow graph knows nothing about our async activity and will not wait for its results. That is why manual synchronization (via increment_wait_count() / decrement_wait_count()) was implemented. After the graph execution finishes, the worker thread can be stopped, and the final log string is printed.

The application output:

$ g++ -std=c++11 -I$TBB_INCLUDE -L$TBB_LIB -ltbb -o hello ./hello_world.cpp
$ ./hello
hello world
done
printed: hello; printed: world;

Note: the code needs C++11 support, so the option -std=c++11 (or -std=c++0x for older compilers) must be used for compilation.

Conclusion

The article demonstrates how to implement a simple Factory that works with streaming_node, a new flow graph node in the Intel TBB library. The detailed description of streaming_node can be found in the Intel TBB documentation (see Intel® Threading Building Blocks Developer Reference -> Appendices -> Preview Features -> Flow Graph -> streaming_node Template Class).

Note that this functionality is provided for preview and is subject to change, including incompatible modifications in the API and behavior.

 

If you have any remarks and suggestions about the article, feel free to leave comments. 

 

Intel® Software Guard Extensions Tutorial Series: Part 7, Refining the Enclave


Part 7 of the Intel® Software Guard Extensions (Intel® SGX) tutorial series revisits the enclave interface and adds a small refinement to make it simpler and more efficient. We’ll discuss how the proxy functions marshal data between unprotected memory space and the enclave, and we’ll also discuss one of the advanced features of the Enclave Definition Language (EDL) syntax.

You can find a list of all of the published tutorials in the article Introducing the Intel® Software Guard Extensions Tutorial Series.

Source code is provided with this installment of the series. With this release we have migrated the application to the 1.7 release of the Intel SGX SDK and also moved our development environment to Microsoft Visual Studio* Professional 2015.

The Proxy Functions

When building an enclave using the Intel SGX SDK you define the interface to the enclave in the EDL. The EDL specifies which functions are ECALLs (“enclave calls,” the functions that enter the enclave) and which ones are OCALLs (“outside calls,” the calls to untrusted functions from within the enclave).

When the project is built, the Edger8r tool that is included with the Intel SGX SDK parses the EDL file and generates a series of proxy functions. These proxy functions are essentially wrappers around the real functions that are prototyped in the EDL. Each ECALL and OCALL gets a pair of proxy functions: a trusted half and an untrusted half. The trusted functions go into EnclaveProject_t.h and EnclaveProject_t.c and are included in the Autogenerated Files folder of your enclave project. The untrusted proxies go into EnclaveProject_u.h and EnclaveProject_u.c and are placed in the Autogenerated Files folder of the project that will be interfacing with your enclave.

Your program does not call the ECALL and OCALL functions directly; it calls the proxy functions. When you make an ECALL, you call the untrusted proxy function for the ECALL, which in turn calls the trusted proxy function inside the enclave. That proxy then calls the “real” ECALL and the return value propagates back to the untrusted function. This sequence is shown in Figure 1. When you make an OCALL, the sequence is reversed: you call the trusted proxy function for the OCALL, which calls an untrusted proxy function outside the enclave that, in turn, invokes the “real” OCALL.


Figure 1. Proxy functions for an ECALL.

The proxy functions are responsible for:

  • Marshaling data into and out of the enclave
  • Placing the return value of the real ECALL or OCALL in an address referenced by a pointer parameter
  • Returning the success or failure of the ECALL or OCALL itself as an sgx_status_t value

Note that this means that each ECALL or OCALL has potentially two return values. There’s the success of the ECALL or OCALL itself, meaning, were we able to successfully enter or exit the enclave, and then the return value of the function being called in the ECALL or OCALL.

The EDL syntax for the ECALL functions ve_lock() and ve_unlock() in our Tutorial Password Manager’s enclave is shown below:

enclave {
   trusted {
      public void ve_lock ();
      public int ve_unlock ([in, string] char *password);
    }
}

And here are the untrusted proxy function prototypes that are generated by the Edger8r tool:

sgx_status_t ve_lock(sgx_enclave_id_t eid);
sgx_status_t ve_unlock(sgx_enclave_id_t eid, int* retval, char* password);

Note the additional arguments that have been added to the parameter list for each function and that the functions now return a type of sgx_status_t.

Both proxy functions need the enclave identifier, which is passed in the first parameter, eid. The ve_lock() function has no parameters and does not return a value so no further changes are necessary. The ve_unlock() function, however, does both. The second argument to the proxy function is a pointer to an address that will store the return value from the real ve_unlock() function in the enclave, in this case a return value of type int. The actual function parameter, char *password, is included after that.
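To make the two return values concrete, here is a minimal sketch of a call site for the untrusted ve_unlock() proxy. It is illustrative only: the enclave ID variable and the interpretation of the int result are assumptions, not code from the tutorial project.

// Hypothetical call site (sketch). 'eid' is assumed to come from sgx_create_enclave().
int unlock_result = 0;   // receives the return value of the real ve_unlock() in the enclave
sgx_status_t status = ve_unlock(eid, &unlock_result, password);

if (status != SGX_SUCCESS) {
    // The ECALL itself failed: execution never successfully entered the enclave.
} else if (unlock_result == 0) {
    // The enclave was entered, but ve_unlock() reported failure
    // (treating 0 as failure is an assumption; the real semantics are application-defined).
}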

Data Marshaling

The untrusted portion of an application does not have access to enclave memory. It cannot read from or write to these protected memory pages. This presents some difficulties when the function parameters include pointers. OCALLs are especially problematic, because memory allocated inside the enclave is not accessible to the OCALL, but even ECALLs can have issues. Enclave memory is mapped into the application’s memory space, so enclave pages can be adjacent to unprotected memory pages. If you pass a pointer to untrusted memory into an enclave, and then fail to do appropriate bounds checking in your enclave, you may inadvertently cross the enclave boundary when reading or writing to that memory in your ECALL.

The Intel SGX SDK’s solution to this problem is to copy the contents of data buffers into and out of enclaves, and have the ECALLs and OCALLs operate on these copies of the original memory buffer. When you pass a pointer into an enclave, you specify in the EDL whether the buffer referenced by the pointer is being passed into the call, out of the call, or in both directions, and then you specify the size of the buffer. The proxy functions generated by the Edger8r tool use this information to check that the address range does not cross the enclave boundary, copy the data into or out of the enclave as indicated, and then substitute a pointer to the copy of the buffer in place of the original pointer.

This is the slow-and-safe approach to marshaling data and pointers between unprotected memory and enclave memory. However, this approach has drawbacks that may make it undesirable in some cases:

  • It’s slow, since each memory buffer is checked and copied.
  • It requires additional heap space in your enclave to store the copies of the data buffers.
  • The EDL syntax is a little verbose.

There are also cases where you just need to pass a raw pointer into an ECALL and out to an OCALL without it ever being used inside the enclave, such as when passing a function pointer for a callback function straight through to an OCALL. In this case, there is no data buffer per se, just the pointer address itself, and the marshaling functions generated by Edger8r actually get in the way.

The Solution: user_check

Fortunately, the EDL language does support passing a raw pointer address into an ECALL or an OCALL, skipping both the boundary checks and the data buffer copy. The user_check parameter tells the Edger8r tool to pass a pointer as it is and assume that the developer has done the proper bounds checking on the address. When you specify user_check you are essentially trading safety for performance.

A pointer marked with user_check does not have a direction (in or out) associated with it, because there is no buffer copy taking place. Mixing user_check with in or out will result in an error at compile time. Similarly, you don’t supply a count or size parameter.

In the Tutorial Password Manager, the most appropriate place to use the user_check parameter is in the ECALLs that load and store the encrypted password vault. While our design constraints put a practical limit on the size of the vault itself, generally speaking these sorts of bulk reads and writes benefit from allowing the enclave to directly operate on untrusted memory.

The original EDL for ve_load_vault() and ve_get_vault() looks like this:

public int ve_load_vault ([in, count=len] unsigned char *edata, uint32_t len);

public int ve_get_vault ([out, count=len] unsigned char *edata, uint32_t len);

Rewriting these to specify user_check results in the following:

public int ve_load_vault ([user_check] unsigned char *edata);

public int ve_get_vault ([user_check] unsigned char *edata, uint32_t len);

Notice that we were able to drop the len parameter from ve_load_vault(). As you might recall from Part 4, the issue we had with this function was that although the length of the vault is stored as a variable in the enclave, the proxy functions don’t have access to it. In order for the ECALL’s proxy functions to copy the incoming data buffer, we had to supply the length in the EDL so that the Edger8r tool would know the size of the buffer. With the user_check option, there is no buffer copy operation, so this problem goes away. The enclave can read directly from untrusted memory, and it can use its internal variable to determine how many bytes to read.

However, we still send the length as a parameter to ve_get_vault(). This is a safety check to ensure that we don’t accidentally overflow a buffer when fetching the encrypted vault from the enclave.

Summary

The EDL provides three options for passing pointers into an ECALL or an OCALL: in, out, and user_check. These options are summarized in Table 1.

  • in
    • ECALL: The buffer is copied from the application into the enclave. Changes will only affect the buffer inside the enclave.
    • OCALL: The buffer is copied from the enclave to the application. Changes will only affect the buffer outside the enclave.
  • out
    • ECALL: A buffer will be allocated inside the enclave and initialized with zeros. It will be copied to the original buffer when the ECALL exits.
    • OCALL: A buffer will be allocated outside the enclave and initialized with zeros. This untrusted buffer will be copied to the original buffer in the enclave when the OCALL exits.
  • in, out
    • ECALL and OCALL: Data is copied back and forth.
  • user_check
    • ECALL and OCALL: The pointer is not checked. The raw address is passed.

Table 1. Pointer specifiers and their meanings in ECALLs and OCALLs.

If you use the direction indicators, the data buffer referenced by your pointer gets copied and you must supply a count so that the Edger8r can determine how many bytes are in the buffer. If you specify user_check, the raw pointer is passed to the ECALL or OCALL unaltered.

Sample Code

The code sample for this part of the series has been updated to build against the Intel SGX SDK version 1.7 using Microsoft Visual Studio 2015. It should still work with the Intel SGX SDK version 1.6 and Visual Studio 2013, but we encourage you to update to the newer release of the Intel SGX SDK.

Coming Up Next

In Part 8 of the series, we’ll add support for power events. Stay tuned!

Intel® Deep Learning SDK Tutorial: Getting Started with Intel® Deep Learning SDK Training Tool


Download PDF [PDF 1.39 MB]

Introduction

Release Notes

Please find Release Notes at https://software.intel.com/en-us/articles/deep-learning-sdk-release-notes

Installing Intel® Deep Learning SDK Training Tool

For installation steps please see the Intel® Deep Learning SDK Training Tool Installation Guide.

Introducing the Intel® Deep Learning SDK Training Tool

The Intel® Deep Learning SDK Training Tool is a feature of the Intel Deep Learning SDK, which is a free set of tools for data scientists, researchers and software developers to develop, train, and deploy deep learning solutions. The Deployment Tool is currently unavailable.

With the Intel Deep Learning SDK Training Tool, you can:

  • Easily prepare training data, design models, and train models with automated experiments and advanced visualizations
  • Simplify the installation and usage of popular deep learning frameworks optimized for Intel platforms

The Training Tool is a web application running on a Linux* server and provides a user-friendly, intuitive interface for building and training deep learning models.

When you start the Training Tool and log in, you are presented with a workspace displaying the home page and a series of main tabs in the blue panel on the left side. These tabs provide access to a set of features that enable you to upload source images, create training datasets, and build and train your deep learning models.

Uploads Tab

Before you build and train your model, use this tab to upload an archive of images that will form a dataset for training the model. For details see the Uploading Images topic.

Datasets Tab

Create datasets from previously uploaded images using the Datasets panel. A dataset is not just a collection of images; it is a database in a specific format that holds all images arranged in categories.

While creating a dataset you can change the image color mode (grayscale or RGB color) and the encoding format, or multiply the initial image set by data augmentation, i.e., applying different transformations to original images to create modified versions. All changes are stored in the database and do not physically affect the original files.

You can use the entire image set for training and validation (by assigning a percentage for each subset) or use separate folders for each procedure.

For more information see the Creating a Training Dataset section.

Models Tab

Use this tab to create and train your model with an existing dataset. There are three pre-defined topologies available for your model.

While configuring the model, you can apply transformations to images from the selected dataset without changing the original image files.

While each of the pre-defined models already has a set of default parameter values that are optimal for general use, the Models tab enables you to efficiently configure the learning process for specific use cases.

For more information see the Creating a Model section.

Uploading Images

Before creating training datasets, you need to upload images you intend to use for training your model.

Use the Uploads tab to upload input images as a RAR archive with a strictly predefined structure.

All images inside the uploaded archive must be divided into separate subfolders which are named after desired training labels/categories. For example, the structure of a sample archive for 0-9 digits, which could be used for training a LeNet model, may look like the following:

digits.rar/

    0/

        0_01.png

        0_02.png

         …

    1/

        1_01.png

        1_02.png

         …

       …



    9/

        9_01.png

        9_02.png

         …

Choose the archive located on your computer or on the web and specify the root directory that will hold all extracted images.

The directory path is relative to the Docker installation directory you specified while installing the Training Tool. For the installation steps, see Installing Intel® Deep Learning SDK Training Tool.

The table under the Upload button provides the information about current uploads and upload history:

Creating a Training Dataset

You can easily create training datasets using the Datasets tab. Once you click the tab, the panel comes up with the New Dataset icon and a list of previously saved datasets:

You can look up saved datasets by searching by name, and you can edit, rename, or delete them, or complete their generation process. For more information see Saving, Editing and Reviewing a Dataset.

To start creating a dataset, click New Dataset to launch the wizard. A wizard screen contains the following elements:

  1. Dataset Name field – Sets the name of the dataset
  2. Dataset Description field – Sets the description for the dataset
  3. Dataset Manage panel – Enables saving, running or deleting the current dataset at any step
  4. Navigation panel – Indicates the current step and switches between dataset creation steps.

The wizard divides the workflow of creating a training image dataset into three separate steps, indicated as tabs on the navigation bar in the wizard screen:

  1. Define the folder that contains source images, the number of files to use for training and validation, and other settings in the Data folder tab.
  2. Configure preprocessing settings for input images in the Image Preprocessing tab.
  3. Choose the image database options in the Database option tab.

Whenever you need to modify the settings you can switch over the steps using the Next and Back buttons or by clicking a tab on the navigation bar directly.

To abort creating the dataset, click the Delete icon in the toolbar in the upper right corner.

Adding Source Images to a Dataset

Start creating a dataset by setting its name and the source folder.

Set the dataset name using the Dataset Name field to identify the dataset in the dataset collection. Using meaningful names can help you find the dataset in the list when you are creating a model.

You can add annotations for the dataset if needed using the Description field.

Use the Source folder field to specify the path to the root folder that holds contents of the extracted RAR archive that you previously uploaded to the system. If you have not completed this step, see Uploading Images to learn about image archives used for datasets.

From the entire set of training images you can define the image groups for each phase of the model training process:

  • Training – trains the model using a set of image samples.
  • Validation – could be used for model selection and hyper-parameter fine-tuning.

To define the validation subset, choose a percentage of images for validation in the Validation percentage field. The default value is 10%.

Alternatively, you can use a separate folder for validation. You can specify this folder once you select the Use other folder option.

NOTE: If you are using another folder for validation, the respective percentage field resets to zero.

Data augmentation

You can extend your dataset using the Training Tool augmentation feature. It enables you to enlarge the set by creating copies of existing images and applying a number of transformations such as rotating, shifting, zooming and reflecting.

You can simply specify the maximum number of transformations to be applied to each image in the dataset using the Max number of transformations per image field.

Alternatively you can use the Advanced section to additionally define which types of transformations to apply, along with transformation parameters and weights. Weight here is the share of the selected augmentation type in the total number of performed augmentations, expressed as a percentage. The higher the specified weight, the more augmentations of the selected type are performed. The weights of all selected augmentation types must total 100%.
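For example (hypothetical values), weights of 50% for rotation, 30% for shifting, and 20% for zooming satisfy this rule: roughly half of the generated copies would be rotated, and the three weights together total 100%.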

Sometimes transformations result in exposing undefined parts of the image. For example, after zooming out, an image might have blank areas in the border area. Choose a method to fill those blank areas in augmented images using the Fill Method section:

  • Constant - Fills the missing pixels with a certain hexadecimal color code value in RGB format
  • Nearest - Fills the missing pixels with the values of neighboring pixels
  • Wrap - Fills the missing pixels by tiling the image
  • Reflect - Fills the missing pixels by reflecting a region of the image.

Preprocessing Input Image Data

You can pre-process images included in a dataset using the Image Preprocessing tab.

Selecting Image Chroma type

The tool enables you to use color or grayscale modes for images. Choose the desired option in the Type group.

  • If you select the grayscale option when creating a dataset with color images in RGB format, the tool automatically performs the pixel-wise RGB-to-grayscale conversion according to the formula:
    Y = 0.299*Red + 0.587*Green + 0.114*Blue,
    where Y is the intensity/grayscale value.
  • If you use the Color option for grayscale images, the algorithm uses the intensity value as the values of the red, green, and blue channels: R = Y, G = Y, B = Y (a small code sketch of both conversions follows this list).
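The following is a minimal, self-contained sketch of the two pixel-wise conversions described above. It is for illustration only and is not Training Tool code; the struct and function names are invented for the example.

// Illustrative sketch of the pixel-wise conversions (not Training Tool code).
struct RGB { float red, green, blue; };

// RGB-to-grayscale: Y = 0.299*Red + 0.587*Green + 0.114*Blue
float to_grayscale(const RGB& p) {
    return 0.299f * p.red + 0.587f * p.green + 0.114f * p.blue;
}

// Grayscale-to-color: replicate the intensity into all three channels (R = Y, G = Y, B = Y)
RGB to_color(float y) {
    return RGB{ y, y, y };
}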

Resizing Images

By default, the resize dimensions are set to 28x28 pixels, but you can resize the images to an arbitrary size with one of the available resize options:

  • Squash
  • Crop
  • Fill
  • Half crop, half fill.

The table below demonstrates the results of resizing an example image of original size 128x174 pixels to a 100x100 pixel square image using each resizing method.

Original image, 128x174 pixels

Squash transforms the original image by upsampling or downsampling pixels using bi-cubic interpolation to fill the new width and height without keeping the aspect ratio.

The Crop option resizes the image while maintaining the aspect ratio. The image is first resized such that the smaller image dimension fits the corresponding target dimension. Then the larger dimension is cropped by equal amounts from both sides to fit the corresponding target.

The Fill option resizes the image while maintaining the aspect ratio. The image is first resized such that the larger image dimension fits the corresponding target dimension. Then the resultant image is centered in the smaller dimension, and white noise strips of equal width are inserted on both sides to make that dimension equal to the target.

The Half crop, half fill option resizes the image while maintaining the aspect ratio. The image is first resized partway using the Fill option, and then the Crop option is applied. For a transformation where the original image has dimensions w (width) and h (height) and the target image has dimensions W and H, the original image is first resized to W+(w-W)/2 by H+(h-H)/2 with the Fill transformation, and the resulting image is then resized to the target dimensions W x H using the Crop transformation.
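As a worked example with the dimensions used above (w = 128, h = 174, W = H = 100), the Fill step first produces an intermediate image of 100 + (128 - 100)/2 = 114 by 100 + (174 - 100)/2 = 137 pixels, and the Crop step then reduces that 114x137 image to the final 100x100.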

Setting Database options

The Training Tool stores all input images in a database file. At this step you can set desired database options such as database type and image encoding.

You can choose a certain type of the database using the DB backend drop-down list:

  • LMDB
  • LevelDB

To find more information on both types, please see LMDB and LevelDB home pages.

Choose an image encoding format from the Image Encoding drop-down list if needed. Using PNG or JPEG formats can save disk space, but may increase the training time.

Generating a Dataset

After you complete configuring the dataset, you can launch the dataset generation process by clicking the Run icon in the dataset manage panel. The Training Tool starts the process and shows the progress bar and the details of the dataset generation process:

Once generation completes, the dataset's status in the list changes to Completed.

For more information about saving datasets and dataset statuses, see Saving, Editing and Reviewing a Dataset.

Saving, Editing and Reviewing a Dataset

When you click the Dataset tab, the Dataset panel comes up with the New Dataset icon and the list of previously saved datasets. Datasets in the list can be in one of three states: Draft, Ready or Completed.

You can save a dataset as a Draft at any moment before all of its mandatory fields are set. The Ready status indicates that you have set all mandatory fields for the saved dataset and it is ready to be generated. The Completed status identifies already generated datasets.

To find a dataset by its unique name, use the Search field.

You can rename, edit or delete a dataset in the Draft state using the toolbar in the upper right corner.

For a dataset in the Ready state, the Run operation is additionally available.

To view or edit a dataset in the Draft or Ready state, select it from the list or click the Edit icon in the toolbar.

To view the details of a Completed dataset, select it from the list.

Creating a Model

After images have been uploaded and a dataset has been generated from them, you are ready to create and train a deep learning model. To begin the model creation process, choose the Models tab in the vertical blue panel. The panel comes up with the New Model icon and the list of previously trained and drafted models displayed under the Search text field.

You can look up existing models by searching by model name, and you can rename, edit, build, or delete them. For more information see Saving, Editing and Reviewing a Model.

To create a new model, use the New Model icon. Once you click it, you are presented with a wizard screen.

The wizard screen contains the following elements:

  1. Model Name field – A mandatory field that sets the name of the model
  2. Model description field – Adds an optional description about the model
  3. Model manage panel – Enables saving, running training or deleting the current model at any step
  4. Navigation panel – Indicates the current step and switches between model creation steps

The new model creation process consists of four stages as illustrated in the navigation pane.

  1. Select a dataset from the list of generated datasets in the Dataset Selection tab.
  2. Choose and tune a pre-defined model topology in the Topology tab.
  3. Transform images from the dataset if needed in the Data Transformation tab.
  4. Configure default parameters to tune the learning process in the Parameters tab.

Assigning a Dataset

The first stage of creating a model is the dataset selection stage.

As the first step in this stage, enter a unique name for the model in the Model Name text field. Using meaningful names can help you find the model in the model list.

In order to quickly recognize the model among other existing models in the future, the Training Tool provides an option of entering a descriptive text about the model in the Description field.

Every model should be linked to a particular dataset. The next step is choosing the dataset that provides the model with training and validation images. Select an existing dataset from the listed datasets or search for one by name and select it. Press Next to move on to the second stage of the process.

Configuring Model Topology

In the second stage you need to configure the topology of the model.

The first step is to select a specific model topology from the three pre-loaded topologies listed in the Topology name list. These pre-loaded topologies come configured with the optimal training/validation settings for that specific topology under general use conditions. However, you can customize them to match specific requirements by checking the Fine tune topology check-box. There are two levels of fine tuning available, light and medium; you may pick one as desired. Configuration options in the following stages will change depending on whether the fine tuning option is selected and which level of fine tuning is chosen. The Back/Next buttons in the bottom blue pane allow you to move between the four stages as needed.

Transforming Input Images

The third stage allows you to add pre-processing to the images before they are fed to the model for training or validation.

You may add three optional pre-processing operations to the training data. Two of them, cropping and horizontal mirroring, add some degree of randomness to the training process by applying those operations to randomly chosen training images. In image classification tasks with large datasets, these types of random pre-processing are used to enhance the performance of the learned model by making it robust to deviations of input images that may not be covered in the training set.

Mean subtraction, if selected, is applied to each and every image, and there are two options: subtract the mean image or the mean pixel.

Configuring Training Parameters

In the fourth and final stage, training parameters (i.e., hyper-parameters) are configured to tune the training process. Pre-loaded models in the Training Tool come with a set of default values for each of the parameter fields. These values are the optimal parameter values for the given model in its general use case.

Typical training of a deep learning model involves hundreds of thousands of parameters (also known as weights), hence a model is trained over and over with a given training set. One complete pass over the total training dataset is called an epoch. At the end of one epoch, every image in the training dataset has passed exactly once through the model. You can adjust the number of epochs using the Training epochs field. This number depends on the model topology, the parameter estimation algorithm (solver type), the initial learning rate and the learning rate decay curve, the required final accuracy, and the size of the training dataset.

Within an epoch, images in the training dataset are partitioned into batches and the model is trained with one batch at a time. Once a batch of images passes through the model, the parameters of the model are updated, and then the next batch is used. One such pass is called an iteration. In general, a larger batch size reduces the variance in the parameter update process and may lead to faster convergence. However, the larger the batch size, the higher the memory usage during training.
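For example (hypothetical numbers), a training dataset of 50,000 images with a batch size of 100 gives 50,000 / 100 = 500 iterations per epoch; training for 30 epochs would then perform 15,000 parameter updates in total.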

By specifying the Validation interval value, you can define how often validations should take place, in number of epochs. For example, setting the value to 1 will lead to validations taking place at the end of each epoch. Use the Validation batch size value to define the size of the batch of validation images.

Training a deep learning model is a lengthy and complex task, therefore the tool regularly takes snapshots to back up the status of the model being trained and the status of the solver. To set the frequency of backups, use the Snapshot intervals field.

Parameter (or weight) estimation is not only about optimizing the loss/error function for the training dataset, as the estimated weights should be able to generalize the model to new, unseen data. Using the Weight decay setting, you can adjust the regularization term of the model to avoid overfitting.

The learning rate determines the degree to which an update step influences the current values of the weights of the model. Larger learning rates cause drastic changes at each update and could lead to either oscillations around the minimum or missing the minimum altogether, while an unreasonably small rate leads to very slow convergence. The Base learning rate is the initial learning rate at the start of the learning process.

Momentum captures the direction of the last weight update and helps to reduce oscillations and the possibility of getting stuck in a local minimum. Momentum ranges from 0 to 1, and typically a higher value such as 0.9 is used. However, it is important to use a lower learning rate when using a higher momentum to avoid drastic weight updates.
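For reference, these two parameters appear in the classic stochastic-gradient-descent-with-momentum update rule (this is the common textbook formulation, not necessarily the exact form used internally by the tool):

    v_new = momentum * v_old - learning_rate * gradient
    w_new = w_old + v_new

where w represents the weights, v is the accumulated update (velocity), and gradient is the gradient of the loss with respect to the weights on the current batch.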

You may choose a solver type from the list of available types; the default is stochastic gradient descent.

Use Advanced learning rate options to further specify how the learning rate changes during the training.

There are several learning rate update policies (or curves) to choose from.

Step size determines how often the learning rate should be adjusted (in number of iterations).

Gamma controls the amount of change in the learning rate (determines the learning rate function shape) at every adjustment step.

By checking the Visualize LR box you can visualize the learning rate change as a curve.

Running the Training Process

After you complete configuring the model, you can launch the model training process by clicking the Run icon in the model manage panel. The Training Tool starts the process and shows the progress bar and the status of the model as it is being trained:

Once training completes, the model's status in the list changes to TrainingCompleted.

For more information about saving models and model statuses see Saving, Editing and Reviewing a Model.

Saving, Editing and Reviewing a Model

When you click the Models tab, the Models panel comes up with the New Model icon and the list of previously saved models. A model in the list can be in one of three states: Draft, Ready or TrainingCompleted.

You can save the model as a Draft at any moment before all of its mandatory fields are set. The Ready status indicates that you have set all mandatory fields for the saved model and it is ready to be trained. The TrainingCompleted status is reached when the model has completed training with the associated dataset.

To find a model by the given unique name, use the Search field.

You can Rename, Edit or Delete a model in the Draft state using the toolbar in the upper right corner for that model.

For a model in the Ready state, the Run operation is additionally available.

To view or edit a model in either Draft or Ready state, select it from the list or click the Edit icon in the toolbar.

For a model in the TrainingCompleted state, the Rename, Duplicate and Delete operations are available.

To view the details of a completed model, select it from the list.

Additional Resources

To ask questions and share information with other users of the Intel® Deep Learning SDK Training Tool, visit Intel® Deep Learning SDK forum.

What's New? Intel® Threading Building Blocks 2017 Update 3


Changes (w.r.t. Intel TBB 2017 Update 2):

- Added support for Android* 7.0 and Android* NDK r13, r13b.

Preview Features:

- Added template class gfx_factory to the flow graph API. It implements
    the Factory concept for streaming_node to offload computations to
    Intel(R) processor graphics.

Bugs fixed:

- Fixed a possible deadlock caused by missed wakeup signals in
    task_arena::execute().

Heterogeneous TBB (flow graph promotion): 

TBB flow graph: using streaming_node

Unreal Engine* 4: Setting Up Destructive Meshes


Download the Document [PDF 436 KB]

Download the Code Sample


Destructive Meshes

The following is a quick guide on getting a PhysX* Destructible Mesh (DM) working setup in an Unreal Engine* 4 (UE4*) project.

This guide is primarily based on personal trial and error; other methods may exist that work better for your project. See official documentation for tutorials on fracturing and troubleshooting if you would like to go more in depth with Destructive Mesh capabilities.

PhysX* Lab

To get started, download and install PhysX Lab. Version 1.3.2 was used for this paper.

FBX* Files

Whether made in Blender*, Maya*, or other modeling software, set the modeling units to meters, or scale up the model so it is the correct size before exporting. If the model comes into UE4 too small, it will need to be scaled up in the project, which can lead to errors in mesh collision. In general, avoid changing the scale of a DM in UE4, but if needed, scaling down works better than scaling up.

Fracturing

For the purposes of this paper, once the FBX* file is imported into the lab, go ahead and click the Fracture! button in the bottom right-hand corner. To learn more about this feature, see the tutorials on fracturing.

You can go back and play with the fracture settings after getting the more important parts set up, so don’t feel like you have to get the perfect fracture just yet.

Graphics

For a DM to have two different textures (outside and inside), follow these steps in the Control Panel (Figure 1):

  1. Under the Graphics tab, and in the Material Library tab, find the green/white lambert texture.
  2. Right-click the lambert and load the texture as a material.
  3. Select a BMP or Targa file for the mesh.
  4. Select the new texture in the Material Library tab, then under the Mesh Materials tab, click the Apply (black) arrow.
  5. Now, under the Select Interior Material tab, select the lambert and then click the Set Interior Material of Selected button. (You may see this result after applying the mesh material; this is recommended to make sure it takes effect on export.)
  6. Set the U Scale and V Scale to 100.

Figure 1. Graphics tab in the Control Panel.

Settings

Now, for the DM, it’s time to play with some settings (Figure 2). As with the textures, these settings can be played with after the DM has been imported into UE4. It was found that turning settings on in the lab increases the chances of them working as intended when exported. These settings are:

  • Debris Depth
  • Use Lifetime Range
  • Use Debris Depth
  • Destruction Probability (the chance of a chunk being destroyed when hit; this cannot be changed in UE4)
  • Support Depth
  • Asset Defined Support
  • World Overlap

Figure 2. Assets tab in the Control Panel.

Once finished with the settings and the fracture has been set, use Export Asset to export and move the DM to UE4.

Unreal Engine

Bringing a DM into UE4 is as easy as any other asset; use the Import button in the Content Browser. If the FBX file was set up correctly, the DM can be dragged into the scene.

DM Physics

Depending on the mechanics of your game, a few physics settings should be considered (Figure 3):

  • Simulate Physics
    • A False setting is for things like walls and stationary objects.
    • A True setting will cause the mesh to fall (unless gravity is off).
  • Enable Gravity
    • When Simulate Physics is False (the default setting for most DMs), True makes it so a DM doesn’t fall to gravity, but broken chunks will still be affected by gravity.
    • A False setting will cause the DM and its chunks to float around in space.
  • Use Async Scene
    • If True, the DM will not collide with any other physics actor.
    • If False, the DM can collide with other physics actors.

Figure 3. Physics panel.

If the DM is intended to break by falling to the ground or being run into, check the Enable Impact Damage option window (Figure 4). Changing the Impact Resistance changes the amount of force pushed back into the actor that the DM collides with.

Figure 4. Destructible Setting tab.

Unreal Engine* 4: Blueprint CPU Optimizations for Cloth Simulations


Download [PDF 838 KB]

Download the Code Sample


Cloth Simulations

Realistic cloth movement can bring a great amount of visual immersion into a game. Using PhysX* Clothing* is one way to do this without the need for hand animation. Incorporating these simulations into Unreal Engine* 4 is easy, but as it is a taxing process on the CPU, it’s good to understand their performance characteristics and how to optimize them.

Disabling Cloth Simulation

Cloth in Unreal is simulated as long as it is present in the level, whether it can be seen or not; optimization can prevent this wasted work. Do not rely on the Disable Cloth setting for optimizing simulated cloth, as this only works in the construction script and has no effect while the game is in play.

Unreal Physics Stats

To get a better understanding of cloth simulation and its effect on a game and system, we can use the console command Stat PHYSICS in Unreal.

After entering Stat PHYSICS at the command line, the physics table overlay appears (Figure 1). To remove it, just enter the same command into the console.


Figure 1. Physics overlay table.

While there is a lot of information available, we need only worry about the first two (Cloth Total and Cloth Sim) for the purposes of this paper.

Cloth Total represents the total number of cloth draws within the scene, and Cloth Sim (simulation) represents the number of active cloth meshes currently simulated. Keeping these two numbers at a level reasonable for your target platform helps prevent a loss of frame rate due to the CPU being loaded down with processing cloth. By adding an increasing number of cloth meshes to the level, the number of simulations the CPU can handle at once becomes apparent.

Level of Detail

When creating a skeletal mesh and attaching an apex cloth file to it, that cloth simulation will always be tied to the zero value of the Level of Detail (LOD) of that mesh. If the mesh is ever switched off of LOD 0, the cloth simulation will no longer take place. Using this to our advantage, we can create a LOD 1 that is the same in every way as our LOD 0 (minus the cloth apex file), and use it as a switch for whether we want to use the cloth simulation (Figure 2).


Figure 2. Level of Detail information.

Boolean Switch

Now that we have a switch, we can set up a simple blueprint to control it. By creating an event (or function), we can branch using a Boolean switch between simulating the cloth (LOD 0) and not simulating the cloth (LOD 1). This event could be called when a trigger is entered, to begin simulating the cloth meshes in the next area, and again when the player leaves that area to stop those simulations, or in any number of other ways, depending on the game level.


Figure 3. Switch blueprint.

Occlusion Culling Switch

If a more automated approach is desired, Occlusion Culling can be used as the switching variable. To do this, call the “Was Recently Rendered” function, and attach its return to the switch branch (Figure 4). This will stop the cloth simulation when the actor is no longer rendered.


Figure 4. The ”Was Recently Rendered” function in the switch blueprint.
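For reference, the same occlusion-driven switch can be sketched in C++ rather than Blueprint. The snippet below is a minimal illustration only: the actor class and the ClothMesh member are invented for the example, and it assumes LOD 1 is the cloth-free duplicate of LOD 0 described above. It drives the skeletal mesh component's ForcedLodModel property (0 = automatic LOD selection, 1 = force LOD 0, 2 = force LOD 1).

// Sketch only: toggle cloth simulation by forcing the LOD based on recent visibility.
// Call this from the owning actor's Tick() or from a timer.
void AMyClothActor::UpdateClothLOD()
{
    if (!ClothMesh) return; // ClothMesh: USkeletalMeshComponent* with the cloth asset on LOD 0

    if (WasRecentlyRendered()) // true if the actor was rendered within the last ~0.2 s
    {
        ClothMesh->ForcedLodModel = 0; // automatic LOD -> LOD 0 -> cloth simulates
    }
    else
    {
        ClothMesh->ForcedLodModel = 2; // force LOD 1 -> cloth simulation stops
    }
}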

The problem with this method comes from the simulation reset that occurs when the simulation is switched back on. If the cloth mesh is drastically different when it is simulated, the player will always see this transition. To mitigate the chance of this happening, the bounds of the mesh can be increased with import settings. However, this also means intentionally rendering objects that cannot be seen by the player, so make sure it is worthwhile in terms of the game’s rendering demands.

A level design approach to solving this issue would include making sure all dynamically capable cloth meshes (such as flags) are placed in the same direction as the wind.

It may be possible to program a method in C++ that saves the position data of every vertex of the cloth simulation and translates the mesh back into that position when the simulation is turned back on. That could be a very taxing method, depending on the data structure used and the number of cloth simulations in the level.


Figure 5. Cloth Simulations without Occlusion Culling switch.


Figure 6. Cloth Simulations with Occlusion Culling switch.

Combination/Set Piece Switch

If the level happens to have a very dynamic set piece that is important enough to always look its best, an additional branch that uses a Boolean switch can be attached to the actor; in Figure 7 we call it “Optimize Cloth?”.


Figure 7. Set Piece switch.

With this new switch, importance can be given to certain cloth meshes that should always be simulated by switching their “Optimize Cloth?” value to false.

Using a Set Piece Switch

In figure 8 below, three cloth meshes are flags that turn away and point backwards, relative to their starting position. It takes a few seconds for this to look natural, but because they really sell the fact that they are not hand animated, I set them to be Set Pieces (Optimize Cloth? false), so they are always being simulated.


Figure 8. Complex flags used with set piece switches.

Building an Arcade Cabinet with Skull Canyon

Hi I’m Bela Messex, one half of Buddy System, a bedroom studio based in Los Angeles, and makers of the game Little Bug.

Why an Arcade Cabinet?

My co-developer and I come from worlds where DIY wasn't a marketable aesthetic, but a natural and necessary creative path. Before we met and found ourselves in video game design we made interactive sculpture, zines, and comics. We've been interested in ways to blend digital games with physical interaction, and while this can take many forms, a straightforward route was to house our debut game, Little Bug, in a custom arcade cabinet. As it turns out, doing so was painless, fun, and easy; and at events like Fantastic Arcade and Indiecade, it provided a unique interaction that really drew attendees.

The Plan

To start off, I rendered a design in Unity complete with Image Effects, Animations and completely unrealistic lighting… If only real life were like video games, but at least I now had a direction.

The Components

This worked for us and could be a good starting point for you, but you might want to tailor it a bit to your game's unique needs.

  • Intel NUC Skull Canyon.
  • 2 arcade joysticks.
  • 3 arcade buttons.
  • 2 generic PC joystick boards with wires included.
  • 4’ x 8’ MDF panel.
  • 24” monitor.
  • 8” LED accent light.
  • Power Strip.
  • Power Drill.
  • Nail gun and wood glue.
  • Screws of varying sizes and springs.
  • 6” piano hinge.
  • Velcro strips.
  • Zip ties.
  • Black spray paint and multicolored paint markers.
  • Semi opaque plexi.

Building the Cabinet

When I was making sculptures, I mainly welded, so I asked my friend Paul for some help measuring and cutting the MDF panels. We did this by designing our shapes on the spot with a jigsaw, pencil, and basic drafting tools. Here is Paul in his warehouse studio with the soon to be cabinet.

We attached the cut pieces with glue and a nail gun, but you could use screws if you need a little more strength. Notice the hinge in the front - this was Paul's idea and ended up being a lifesaver later on when I needed to install buttons and joysticks. Next to the paint can is a foot pedal we made specifically for Little Bug's unique controls: two joysticks and a button used simultaneously. On a gamepad this dual-stick setup is no problem, but translated to two full-sized arcade joysticks both hands would be occupied, so how do you press that button? Solution: use your foot!

After painting the completed frame, it was time for the fun part - installing electronics. I used a cheap ($15) kit that included six buttons, a joystick, a USB controller board, and all the wiring. After hundreds of plays, it's all still working great. Notice the LED above the screen to light up the marquee for a classic arcade feel.

Once the NUC was installed in the back via velcro strips, I synced the buttons and joysticks inside the Unity inspector and created a new build specifically designed for the cabinet. Little Bug features hand drawn sprites, so we drew on all of the exterior designs with paint markers to keep that look coherent. The Marquee was made by stenciling painter’s tape with spray paint.

The Joy of Arcade

There is really nothing like watching players interact with a game you’ve made. Even though Little Bug itself is the same, the interaction is now fundamentally different, and as game designers it has been mesmerizing to watch people play it in this new way. The compact size and performance of the NUC was perfect for creating experiences like this, and it’s worked so well I’m already drawing up plans for more games in the same vein.


Thermal Management Overview for Intel® Joule™ Developer Kit

Intel Unite Use Case Guide for Audio/Video Conferencing Plugins

Intel Unite is designed to be the solution that simplifies user connections within the conference room. Because Intel Unite technology does not have integrated video capture or audio solutions, the expectation is for cloud audio/visual collaboration companies to provide those services, both within the corporate firewall and outside it.

With Intel Unite, the collaboration room can be made more user friendly: it becomes a dongle-free environment in which users connect to the in-room screen over the corporate Wi-Fi network. Individual connections are made using the six-digit PIN code that the Intel Unite hub displays on the in-room screen.

Audio/Video Conferencing Plugins

The Intel Unite plugin comes into play for a cloud-based audio/video conferencing solution by providing a dongle-free environment within the collaboration room, while the conferencing solution provides screen sharing outside the conference room. This effectively makes the hub a meeting participant, which is able to share its own screen or view content shared to screens outside the collaboration room. The plugin allows the Intel Unite hub application to control the A/V collaboration tool features, such as session connect/disconnect, volume control, and sharing controls.

Suggested Integration

Meeting Invite Handling

How the collaboration software is set up will ultimately affect how this is handled. The actual connection to the meeting can be automated or manual.

Invite handling should be kept as simple as possible, such as sending a meeting notice to the conference room PC and allowing that PC to connect to the collaboration session as required. This can be an app-based method or an email/calendar-based method, such as the one the plugin for Skype* for Business uses.

Sharing content in Unite and AV conferencing session

Participants joining the Unite session with the six-digit PIN will be seen as a single participant. If someone shares to the Unite hub, the hub will share out to the AV solution.

AV Control Requirements for plugin

Plugin requirements will include (if applicable) controls for existing room devices, such as:

  • Room Camera control off/on
  • Room microphones mute
  • Room speakers mute
  • Session Disconnect
  • Auto disconnect from AV Session when last Unite User disconnects
  • Audio Controls +, -, mute
  • Sharing Controls
  • Any Optional AV Control that may be needed

Reactively join conference room to an ad-hoc AV conferencing session (reactive action)

  • Ability to "answer" in the room from a connected, Unite-enabled client

Toast behavior

  • View toast of upcoming meetings scheduled for hub on room display
  • Indicate when someone has joined or left the AV conferencing session (not the Intel Unite Session)

Custom screen layouts for multi-screen rooms

  • Toggle different visual layouts on displays
  • Camera feed(s) on one display, content sharing on another
  • Different camera feeds on different displays, content sharing on another
  • Mirrored displays - camera feed(s) + content on each display

Support for hardware button behavior

  • Ability to work with speaker/mic/hub hardware on meeting room table - support 'Answer' and 'Hang up' button actions to Join and End scheduled meetings with a button press, where the room hub/PC is an invited resource.

Summary

Intel Unite allows for a dongle-free environment for collaboration room sharing, while the cloud A/V collaboration tool handles the audio and video conferencing beyond the collaboration room.

Improving the Performance of Principal Component Analysis with Intel® Data Analytics Acceleration Library

Have you ever tried to access a website and had to wait a long time before you could access it or not been able to access it at all? If so, that website might be falling victim to what is called a Denial of Service1 (DoS) attack. DoS attacks occur when an attacker floods a network with information like spam emails, causing the network to be so busy handling that information that it is unable to handle requests from other users.

To prevent a spam email DoS attack, a network needs to be able to identify “garbage”/spam emails and filter them out. One way to do this is to compare an email's pattern with those in a library of email spam signatures. Incoming patterns that match those of the library are labeled as attacks. Since spam emails can come in many shapes and forms, there is no way to build a library that can store all the patterns. To increase the chance of identifying spam emails, there needs to be a method to restructure the data in a way that makes it simpler to analyze.

This article discusses an unsupervised2 machine-learning3 algorithm called principal component analysis4 (PCA) that can be used to simplify the data. It also describes how Intel® Data Analytics Acceleration Library (Intel® DAAL)5 helps optimize this algorithm to improve the performance when running it on systems equipped with Intel® Xeon® processors.

What is Principal Component Analysis?

PCA is a popular data analysis method. It is used to reduce the complexity of the data without losing its important properties, making the data easier to visualize and analyze. Reducing the complexity of the data means reducing the original dimensions to fewer dimensions while preserving the important features of the original datasets. It is normally used as a preprocessing step for machine-learning algorithms like K-means6, resulting in simpler modeling and thus better performance.

Figures 1–3 illustrate how the PCA algorithm works. To simplify the problem, let’s limit the scope to two-dimensional space.


Figure 1. Original dataset layout.

Figure 1 shows the objects of the dataset. We want to find the direction where the variance is maximal.


Figure 2. The mean and the direction with maximum variance.

Figure 2 shows the mean of the dataset and the direction with maximum variance. The first direction with the maximal variance is called the first principal component.


Figure 3. Finding the next principal component.

Figure 3 shows the next principal component. The next principal component is the direction where the variance is the second largest. Note that the second direction is orthogonal to the first direction.

Figures 4–6 show how the PCA algorithm is used to reduce the dimensions.


Figure 4. Re-orientating the graph.

Figure 4 shows the new graph after rotating it so that the axis (P1) corresponding to the first principal component becomes a horizontal axis.


Figure 5. Projecting the objects to the P1 axis.

In Figure 5 the objects have been projected onto the P1 axis, the axis corresponding to the first principal component.


Figure 6. Reducing from two dimensions to one dimension.

Figure 6 shows the effect of using PCA to reduce from two dimensions (P1 and P2) to one dimension (P1) based on the maximal variance. Similarly, this same concept is used on multi-dimensional datasets to reduce their dimensions while still maintaining much of their characteristics by dropping dimensions with lower variances.
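
To make the idea concrete, the following is a minimal sketch (not from the original article) using plain NumPy: it centers a small, made-up two-dimensional dataset, finds the principal directions from the covariance matrix, and keeps only the first component, mirroring the reduction shown in Figures 1–6.

import numpy as np

# Toy two-dimensional dataset (rows are observations); the values are made up.
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])

# Center the data around its mean (Figure 2).
X_centered = X - X.mean(axis=0)

# Eigen-decomposition of the covariance matrix gives the principal directions.
cov = np.cov(X_centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Order the directions by decreasing variance; the first column is then P1.
order = np.argsort(eigenvalues)[::-1]
eigenvectors = eigenvectors[:, order]

# Project onto P1 only: two dimensions reduced to one (Figure 6).
X_reduced = X_centered @ eigenvectors[:, :1]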

Information about PCA mathematical representation can be found at references 7 and 8.

Applications of PCA

PCA applications include the following:

  • Detecting DoS and network probe attacks
  • Image compression
  • Pattern recognition
  • Analyzing medical imaging

Pros and Cons of PCA

The following lists some of the advantages and disadvantages of PCA.

  • Pros
    • Fast algorithm
    • Shows the maximal variance of the data
    • Reduces the dimension of the origin data
    • Removes noise.
  • Cons
    • Non-linear structure is hard to model with PCA

Intel® Data Analytics Acceleration Library

Intel DAAL is a library consisting of many basic building blocks that are optimized for data analytics and machine learning. These basic building blocks are highly optimized for the latest features of the latest Intel® processors. More about Intel DAAL can be found at reference 5.

The next section shows how to use PCA with PyDAAL, the Python* API of Intel DAAL. To install PyDAAL, follow the instructions in reference 9.

Using the PCA Algorithm in Intel Data Analytics Acceleration Library

To invoke the PCA algorithm in Python10 using Intel DAAL, follow these steps:

  1. Import the necessary packages using the commands from and import
    1. Import the necessary functions for loading the data by issuing the following command:
      from daal.data_management import HomogenNumericTable
    2. Import the PCA algorithm using the following commands:
      import daal.algorithms.pca as pca
    3. Import numpy for calculations.
      import numpy as np
  2. Import the createSparseTable function to create a numeric table to store input data read from a file.
    from utils import createSparseTable
  3. Load the data into the data set object declared above.
     dataTable = createSparseTable(dataFileName)
    Where dataFileName is the name of the input .csv data file
  4. Create an algorithm object for PCA using the correlation method.
    pca_alg = pca.Batch_Float64CorrelationDense()
    Note: if we want to use the SVD (singular value decomposition) method, we can use the following command instead:
    pca_alg = pca.Batch_Float64SvdDense()
  5. Set the input for the algorithm.
    pca_alg.input.setDataset(pca.data, dataTable)
  6. Compute the results.
    result = pca_alg.compute()
    The results can be retrieved using the following commands:
    result.get(pca.eigenvalues)
    result.get(pca.eigenvectors)
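
Putting these steps together, here is a minimal end-to-end sketch based on the calls listed above. It assumes the PyDAAL packages described in reference 9 are installed and, to keep the example self-contained, it builds the input numeric table from a small made-up NumPy array rather than reading a .csv file through createSparseTable.

import numpy as np
import daal.algorithms.pca as pca
from daal.data_management import HomogenNumericTable

# Toy dense dataset: 6 observations with 3 features each (values are made up).
data = np.array([[1.0,  2.0,  3.0],
                 [2.0,  4.1,  6.2],
                 [3.0,  5.9,  9.1],
                 [4.0,  8.2, 11.8],
                 [5.0,  9.8, 15.2],
                 [6.0, 12.1, 17.9]])
dataTable = HomogenNumericTable(data)

# Create the PCA algorithm object using the correlation method and set its input.
pca_alg = pca.Batch_Float64CorrelationDense()
pca_alg.input.setDataset(pca.data, dataTable)

# Compute the decomposition and retrieve the eigenvalues and eigenvectors.
result = pca_alg.compute()
eigenvalues = result.get(pca.eigenvalues)
eigenvectors = result.get(pca.eigenvectors)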

Conclusion

PCA is one of the simplest unsupervised machine-learning algorithms that is used to reduce the dimensions of a dataset. Intel DAAL contains an optimized version of the PCA algorithm. With Intel DAAL, you don’t have to worry about whether your applications will run well on systems equipped with future generations of Intel Xeon processors. Intel DAAL will automatically take advantage of new features in new Intel Xeon processors. All you need to do is link your applications to the latest version of Intel DAAL.

References

1. Denial of service attacks

2. Unsupervised learning

3. Wikipedia – machine learning

4. Principal component analysis

5. Introduction to Intel DAAL

6. K-means algorithm

7. Principal component analysis for machine learning

8. Principal component analysis tutorial

9. How to install Intel’s distribution for Python

10. Python website

Intel® XDK FAQs - Debug & Test

Why is the Debug tab being deprecated and removed from the Intel XDK?

The Debug tab is being retired because, as previously announced and noted in the release notes, future editions of the Intel XDK will focus on the development of IoT (Internet of Things) apps and IoT mobile companion apps. Since we introduced the Intel XDK IoT Edition in September of 2014, the need for accessible IoT app development tools has increased dramatically. At the same time, HTML5 mobile app development tools have matured significantly. Given the maturity of the free open-source HTML5 mobile app development tools, we feel you are best served by using those tools directly.

Similar reasoning applies to the hosted weinre server (on the Test tab) and the Live Development Pane on the Develop tab.

How do I do "rapid debugging" with remote CDT (or weinre or remote Web Inspector) in a built app?

Attempting to debug a built mobile app (with weinre, remote CDT or Safari Web Inspector) seems like a difficult or impossible task. There are, in fact, many things you can do with a built app that do not require rebuilding and reinstalling your app between each source code change.

You can continue to use the Simulate tab for debugging that does not depend on third-party plugins. Then switch to debugging a built app when you need to deal with third-party plugin issues that cannot be resolved using the Simulate tab. The best place to start is with a built Android app installed directly on-device, which provides full JavaScript and CSS debugging, by way of remote Chrome* DevTools*. For those who have access to a Mac, it is also possible to use remote web inspector with Safari to debug a built iOS app. Alternatively, you can use weinre to debug a built app by installing a weinre server directly onto your development system. For additional help on using weinre locally, watch Using WEINRE to Debug an Intel® XDK Cordova* App (beginning at about 14:30 in the video).

The interactive JavaScript console is your "best friend" when debugging with remote CDT, remote Web Inspector or weinre in a built app. Watch this video from ~19:30 for a technique that shows how to modify code during your debug session, without requiring a rebuild and reinstall of your app, via the JavaScript debug console. The video demonstrates this technique using weinre, but the same technique can also be used with a CDT console or a Web Inspector console.

Likewise, use the remote CDT CSS editor to manipulate CSS rules in order to figure out how to best style your UI. Or, use the Simulate tab or the Brackets* Live Preview feature. The Brackets Live Preview feature utilizes your desktop browser to provide a feature similar to Intel XDK Live Layout Editing. If you use the Google* Chrome browser with Brackets Live Preview you can use the Chrome device emulation feature to simulate a variety of customizable device viewports.

The Intel XDK is not generating a debug module or is not starting my debug module.

There are a variety of things that can go wrong when attempting to use the Debug tab:

  • your test device cannot be seen by the Debug tab
  • the debug module build fails
  • the debug module builds, but fails to install onto your test device
  • the debug module builds and installs, but fails to "auto-start" on your test device
  • your test device has run out of memory or storage and needs to be cleared
  • there is a problem with the adb server on your development system

Other problems may also arise, but the above list represents the most common. Search this FAQ and the forum for solutions to these problems. Also, see the Debug tab documentation for some help with installing and configuring your system to use the adb debug driver with your device.

What are the requirements for Testing on Wi-Fi?

  1. Both Intel XDK and App Preview mobile app must be logged in with the same user credentials.
  2. Both devices must be on the same subnet.

Note: Your computer's Security Settings may be preventing Intel XDK from connecting with devices on your network. Double check your settings for allowing programs through your firewall. At this time, testing on Wi-Fi does not work within virtual machines.

How do I configure app preview to work over Wi-Fi?

  1. Ensure that both Intel XDK and App Preview mobile app are logged in with the same user credentials and are on the same subnet
  2. Launch App Preview on the device
  3. Log into your Intel XDK account
  4. Select "Local Apps" to see a list of all the projects in Intel XDK Projects tab
  5. Select desired app from the list to run over Wi-Fi

Note: Ensure the app source files are referenced from the right source directory. If it isn't, on the Projects Tab, change the 'source' directory so it is the same as the 'project' directory and move everything in the source directory to the project directory. Remove the source directory and try to debug over local Wi-Fi.

How do I clear app preview cache and memory?

[Android*] Simply kill the app running on your device as an Active App on Android* by swiping it away after clicking the "Recent" button in the navigation bar. Alternatively, you can clear data and cache for the app from under Settings App > Apps > ALL > App Preview.

[iOS*] By double tapping the Home button then swiping the app away.

[Windows*] You can use the Windows* Cache Cleaner app to do so.

What are the Android* devices supported by App Preview?

We officially only support and test Android* 4.x and higher, although you can use Cordova for Android* to build for Android* 2.3 and above. For older Android* devices, you can use the build system to build apps and then install and run them on the device to test. To help in your testing, you can include the weinre script tag from the Test tab in your app before you build your app. After your app starts up, you should see the Test tab console light up when it sees the weinre script tag contact the device (push the "begin debugging on device" button to see the console). Remember to remove the weinre script tag before you build for the store.

What do I do if Intel XDK stops detecting my Android* device?

Conflicts between different versions of adb can cause device detection issues. 

Ensure that all applications, such as Eclipse, Chrome, Firefox, Android Studio and other Android mobile development tools are not running on your workstation. Exit the Intel XDK and kill all adb processes that are running on your workstation. Restart the Intel XDK only after you have killed all instances of adb on your workstation. 

You can scan your disk for copies of adb using the following command lines:

[Linux*/OS X*]:

$ sudo find / -name adb -type f 

[Windows*]:

> cd \
> dir /s adb.exe

For more information regarding Android* USB debug, visit the Intel XDK documentation on debugging and testing.

How do I debug an app that contains third party Cordova plugins?

See the Debug and Test Overview doc page for a more complete overview of your debug options.

When using the Test tab with Intel App Preview your app will not include any third-party plugins, only the "core" Cordova plugins.

The Emulate tab will load the JavaScript layer of your third-party plugins, but does not include a simulation of the native code part of those plugins, so it will present you with a generic "return" dialog box to allow you to execute code associated with third-party plugins.

When debugging Android devices with the Debug tab, the Intel XDK creates a custom debug module that is then loaded onto your USB-connected Android device, allowing you to debug your app AND its third-party Cordova plugins. When using the Debug tab with an iOS device only the "core" Cordova plugins are available in the debug module on your USB-connected iOS device.

If the solutions above do not work for you, then your best bet for debugging an app that contains a third-party plugin is to build it and debug the built app installed and running on your device. 

[Android*]

1) For Crosswalk* or Cordova for Android* build, create an intelxdk.config.additions.xml file that contains the following lines:

<!-- Change the debuggable preference to true to build a remote CDT debuggable app for -->
<!-- Crosswalk* apps on Android* 4.0+ devices and Cordova apps on Android* 4.4+ devices. -->
<preference name="debuggable" value="true" />
<!-- Change the debuggable preference to false before you build for the store. -->

and place it in the root directory of your project (in the same location as your other intelxdk.config.*.xml files). Note that this will only work with Crosswalk* on Android* 4.0 or newer devices or, if you use the standard Cordova for Android* build, on Android* 4.4 or greater devices.

2) Build the Android* app

3) Connect your device to your development system via USB and start the app

4) Start Chrome on your development system and type "chrome://inspect" in the Chrome URL bar. You should see your app in the list of apps and tabs presented by Chrome, you can then push the "inspect" link to get a full remote CDT session to your built app. Be sure to close Intel XDK before you do this, sometimes there is interference between the version of adb used by Chrome and that used by Intel XDK, which can cause a crash. You might have to kill the adb process before you start Chrome (after you exit the Intel XDK).

[iOS*]

Refer to the instructions on the updated Debug tab docs to get on-device debugging. We do not have the ability to build a development version of your iOS* app yet, so you cannot use this technique with iOS* apps. However, you can include the weinre script from the Test tab in your iOS* app when you build it and use the Test tab to remotely access your built iOS* app. This works best if you include a lot of console.log messages.

[Windows* 8]

You can use the Test tab, which provides a weinre script. Include that script in the app that you build, run the app, and connect to the weinre server to work with the console.

Alternatively, you can use App Center to setup and access the weinre console (go here and use the "bug" icon).

Another approach is to write console.log messages to a <textarea> screen on your app. See either of these apps for an example of how to do that:

Why does my device show as offline on Intel XDK Debug?

“Media” mode is the default USB connection mode but, for some unidentified reason, it frequently fails to work over USB on Windows* machines. Configure the USB connection mode on your device for "Camera" instead of "Media" mode.

What do I do if my remote debugger does not launch?

You can try the following to have your app run on the device via debug tab:

  • Place the intelxdk.js library before the </body> tag
  • Place your app specific JavaScript files after it
  • Place the call to initialize your app in the device ready event function

Why do I get an "error installing App Preview Crosswalk" message when trying to debug on device?

You may be running into a RAM or storage problem on your Android device; as in, not enough RAM available to load and install the special App Preview Crosswalk app (APX) that must be installed on your device. See this site (http://www.devicespecifications.com) for information regarding your device. If your device has only 512 MB of RAM, which is a marginal amount for use with the Intel XDK Debug tab, you may have difficulties getting APX to install.

You may have to do one or all of the following:

  • remove as many apps from RAM as possible before installing APX (reboot the device is the simplest approach)
  • make sure there is sufficient storage space in your device (uninstall any unneeded apps on the device)
  • install APX by hand

The last step is the hardest, but only if you are uncomfortable with the command-line:

  1. while attempting to install APX (above) the XDK downloaded a copy of the APK that must be installed on your Android device
  2. find that APK that contains APX
  3. install that APK manually onto your Android device using adb

To find the APK, on a Mac:

$ cd ~/Library/Application\ Support/XDK
$ find . -name *apk

To find the APK, on a Windows machine:

> cd %LocalAppData%\XDK
> dir /s *.apk

For each version of Crosswalk that you have attempted to use (via the Debug tab), you will find a copy of the APK file (but only if you have attempted to use the Debug tab and the XDK has successfully downloaded the corresponding version of APX). You should find something similar to:

./apx_download/12.0/AppAnalyzer.apk

following the searches, above. Notice the directory that specifies the Crosswalk version (12.0 in this example). The file named AppAnalyzer.apk is APX and is what you need to install onto your Android device.

Before you install onto your Android device, you can double-check to see if APX is already installed:

  • find "Apps" or "Applications" in your Android device's "settings" section
  • find "App Preview Crosswalk" in the list of apps on your device (there can be more than one)

If you found one or more App Preview Crosswalk apps on your device, you can see which versions they are by using adb at the command-line (this assumes, of course, that your device is connected via USB and you can communicate with it using adb):

  1. type adb devices at the command-line to confirm you can see your device
  2. type adb shell 'pm list packages -f' at the command-line
  3. search the output for the word app_analyzer

The specific version(s) of APX installed on your device end with a version ID. For example, com.intel.app_analyzer.v12 means you have APX for Crosswalk 12 installed on your device.

To install a copy of APX manually, cd to the directory containing the version of APX you want to install and then use the following adb command:

$ adb install AppAnalyzer.apk

If you need to remove the v12 copy of APX, due to crowding of available storage space, you can remove it using the following adb command:

$ adb uninstall com.intel.app_analyzer.v12

or

$ adb shell am start -a android.intent.action.DELETE -d package:com.intel.app_analyzer.v12

The second one uses the Android uninstall intent to remove the app. You'll have to respond to an uninstall confirmation on the Android device's screen. See this SO issue for details. Obviously, if you want to uninstall a different version of APX, specify the package ID corresponding to that version of APX.

Why is Chrome remote debug not working with my Android or Crosswalk app?

For a detailed discussion regarding how to use Chrome on your desktop to debug an app running on a USB-connected device, please read this doc page Remote Chrome* DevTools* (CDT).

Check to be sure the following conditions have been met:

  • The version of Chrome on your desktop is greater than or equal to the version of the Chrome webview in which you are debugging your app.

    For example, Crosswalk 12 uses the Chrome 41 webview, so you must be running Chrome 41 or greater on your desktop to successfully attach a remote Chrome debug session to an app built with Crosswalk 12. The native Chrome webview in an Android 4.4.2 device is Chrome 30, so your desktop Chrome must be greater than or equal to Chrome version 30 to debug an app that is running on that native webview.
  • Your Android device is running Android 4.4 or higher, if you are trying to remote debug an app running in the device's native webview, and it is running Android 4.0 or higher if you are trying to remote debug an app running Crosswalk.

    When debugging against the native webview, remote debug with Chrome requires that the remote webview is also Chrome; this is not guaranteed to be the case if your Android device does not include a license for Google services. Some manufacturers do not have a license agreement with Google for distribution of the Google services on their devices and, therefore, may not include Chrome as their native webview, even if they are an Android 4.4 or greater device.
  • Your app has been built to allow for remote debug.

    Within the intelxdk.config.additions.xml file you must include this line: <preference name="debuggable" value="true" /> to build your app for remote debug. Without this option your app cannot be attached to for remote debug by Chrome on your desktop.

How do I detect if my code is running in the Emulate tab?

In the obsolete intel.xdk apis there is a property you can test to detect if your app is running within the Emulate tab or on a device. That property is intel.xdk.isxdk. A simple alternative is to perform the following test:

if( window.tinyHippos )

If the test passes (the result is true) you are executing in the Emulate tab.

Never ending "Transferring your project files to the Testing Device" message from Debug tab; results in no Chrome DevTools debug console.

This is a known issue but a resolution for the problem has not yet been determined. If you find yourself facing this issue you can do the following to help resolve it.

On a Windows machine, exit the Intel XDK and open a "command prompt" window:

> cd %LocalAppData%\XDK
> rmdir cdt_depot /s/q

On a Mac or Linux machine, exit the Intel XDK and open a "terminal" window:

$ find ~ -name global-settings.xdk
$ cd <location-found-above>
$ rm -Rf cdt_depot

Restart the Intel XDK and try the Debug tab again. This procedure is deleting the cached copies of the Chrome DevTools that were retrieved from the corresponding App Preview debug module that was installed on your test device.

One action that has been observed to cause this problem is removing one device from your USB port and attaching a new device for debug. A workaround that sometimes helps when switching between devices is to:

  • switch to the Develop tab
  • close the XDK
  • detach the old device from the USB
  • attach the new device to your USB
  • restart the XDK
  • switch to the Debug tab

Can you integrate the iOS Simulator as a testing platform for Intel XDK projects?

The iOS simulator only runs on Apple Macs... We're trying to make the Intel XDK accessible to developers on the most popular platforms: Windows, Mac and Linux. Additionally, the iOS simulator requires a specially built version of your app to run, you can't just load an IPA onto it for simulation.

What is the purpose of having only a partial emulation or simulation in the Emulate tab?

There's no purpose behind it, it's simply difficult to emulate/simulate every feature and quirk of every device.

Not everyone can afford hardware for testing, especially iOS devices; what can I do?

You can buy a used iPod and that works quite well for testing iOS apps. Of course, the screen is smaller and there is no compass or phone feature, but just about everything else works like an iPhone. If you need to do a lot of iOS testing it is worth the investment. A new iPod costs $200 in the US. Used ones should cost less than that. Make sure you get one that can run iOS 8.

Is testing on Crosswalk on a virtual Android device inside VirtualBox good enough?

When you run the Android emulator you are running on a fictitious device, but it is a better emulation than what you get with the iOS simulator and the Intel XDK Emulate tab. The Crosswalk webview further abstracts the system so you get a very good simulation of a real device. However, considering how inexpensive and easy Android devices are to obtain, we highly recommend you use a real device (with the Debug tab), it will be much faster and even more accurate than using the Android emulator.

Why isn't the Intel XDK emulation as good as running on a real device?

Because the Intel XDK Emulate tab is a Chromium browser, so what you get is the behavior inside that Chromium browser along with some conveniences that make it appear to be a hybrid device. It's poorly named as an emulator, but that was the name given to it by the original Ripple Emulator project. What it is most useful for is simulating most of the core Cordova APIs and your basic application logic. After that, it's best to use real devices with the Debug tab.

Why doesn't my custom splash screen show in the emulator or App Preview?

Ensure the splash screen plugin is selected. Custom splash screens only get displayed on a built app. The emulator and app preview will always use Intel XDK splash screens. Please refer to the 9-Patch Splash Screen sample for a better understanding of how splash screens work.

Is there a way to detect if my program has stopped due to using uninitialized variable or an undefined method call?

This is where the remote debug features of the Debug tab are extremely valuable. Using remote CDT (or remote Safari with a Mac and iOS device) is the only real option for finding such issues. WEINRE and the Test tab do not work well in that situation because when the script stops, WEINRE stops.

Why doesn't the Intel XDK go directly to Debug assuming that I have a device connected via USB?

We are working on streamlining the debug process. There are still obstacles that need to be overcome to ensure the process of connecting to a device over USB is painless.

Can a custom debug module that supports USB debug with third-party plugins be built for iOS devices, or only for Android devices?

The Debug tab, for remote debug over USB can be used with both Android and iOS devices. Android devices work best. However, at this time, debugging with the Debug tab and third-party plugins is only supported with Android devices (running in a Crosswalk webview). We are working on making the iOS option also support debug with third-party plugins, like what you currently get with Android.

Why does my Android debug session not start when I'm using the Debug tab?

Some Android devices include a feature that prevents some applications and services from auto-starting, as a means of conserving power and maximizing available RAM. On Asus devices, for example, there is an app called the "Auto-start Manager" that manages apps that include a service that needs to start when the Android device starts.

If this is the case on your test device, you need to enable the Intel App Preview application as an app that is allowed to auto-start. See the image below for an example of the Asus Auto-start Manager:

Another thing you can try is manually starting Intel App Preview on your test device before starting a debug session with the Debug tab.

How do I share my app for testing in App Preview?

The only way to retrieve a list of apps in App Preview is to login. If you do not wish to share your credentials, you can create an alternate account and push your app to the cloud using App Preview and share that account's credentials, instead.

I am trying to use Live Layout Editing but I get a message saying Chrome is not installed on my system.

The Live Layout Editing feature of the Intel XDK is built on top of the Brackets Live Preview feature. Most of the issues you may experience with Live Layout Editing can be addressed by reviewing this Live Preview Isn't Working FAQ from the Brackets Troubleshooting wiki. In particular, see the section regarding using Chrome with Live Preview.

My AJAX or XHR or Angular $http calls are returning an incorrect return code in App Preview.

Some versions of App Preview include an XHR override library that is designed to deal with issues related to loading file:// URLs outside of the local app filesystem (this is something that is unique to App Preview). Unfortunately, this override appears to cause problems with the return codes for some AJAX, XHR and Angular $http calls. This XHR special handling code can be disabled by adding a "data-noxhrfix" property to your app's <head> tag, in your app's index.html file. For example:

<!DOCTYPE html><html><head data-noxhrfix><meta charset="UTF-8">
...

This override should only apply to situations where the result status is zero and the responseURL is not empty.

Back to FAQs Main

Some Methodologies to Optimize Your VR Application's Power on the Intel® Platform

As VR becomes a popular consumer product, more and more VR content is coming out. Recent investigation shows that many users love VR devices without wires, like all-in-one (AIO) devices or mobile devices. These devices are not charging while being played, so developers need to take special care with application power.

For details, please see the attachments.

How AsiaInfo ADB* Improves Performance with Intel® Xeon® Processor-Based Systems

Background

Supporting high online transaction volumes in real time, especially at peak time, can be challenging for telecom and financial services. To ensure uninterrupted service and a good customer experience, telecom and financial companies are constantly looking for ways to improve their services by enhancing their applications and systems.

AsiaInfo1 ADB* is a scalable online transaction processing2 database targeted for high-performance and mission-critical businesses such as online charge service3 (OCS). AsiaInfo ADB provides high performance, high availability, and scalability by clustering multiple servers.

This article describes how AsiaInfo ADB was able to take advantage of features like Intel® Advanced Vector Extensions 2 (Intel® AVX2)4 and Intel® Transactional Synchronization Extensions (Intel® TSX)5 as well as faster Intel® Solid State Drive hard disks to improve its performance when running on systems equipped with the latest generation of Intel® Xeon® processors.

AsiaInfo ADB on Intel® Xeon® Processor-Based Systems

AsiaInfo engineers modified the ADB code by replacing the “self-implemented” spin lock with pthread_rwlock_wrlock from the GNU* C library6 (glibc). The function pthread_rwlock_wrlock can be configured to enable or disable Intel TSX with an environment variable. With the new ADB version using the glibc lock, when Intel TSX is enabled, the performance improves as shown in Figure 1 compared to that of the original ADB version using the self-implemented lock.

Customers whose disk space is limited and cannot be expanded can enable the compression function. The ADB data compression function saves disk space by compressing data before writing it to disk. This function is CPU intensive and impacts database performance. To reduce that impact, AsiaInfo engineers modified the ADB compression module to use Intel AVX2 intrinsic instructions.

New Intel Xeon processors like the Intel® Xeon® processor E7 v4 family provide more cores (24 compared to 18) and a larger cache (60 MB compared to 45 MB) than the previous-generation Intel® Xeon® processor E7 v3 family. More cores and a larger cache allow more transactions to be served within the same amount of time.

The next section shows how we tested the AsiaInfo ADB workload to compare the performance between the current generation of Intel Xeon processors E7 v4 family and those of the previous generation of Intel Xeon processors E7 v3 family.

Performance Test Procedure

We performed tests on two platforms. One system was equipped with the Intel® Xeon® processor E7-8890 v3 and the other with the Intel® Xeon® processor E7-8890 v4. We wanted to see how Intel TSX, Intel AVX2, and faster solid state drives (SSDs) affect performance.

Test Configuration

System equipped with the quad-socket Intel Xeon processor E7-8890 v4

  • System: Preproduction
  • Processors: Intel Xeon processor E7-8890 v4 @2.2 GHz
  • Cache: 60 MB
  • Cores: 24
  • Memory: 256 GB DDR4-1600 LV DIMM
  • SSD: Intel® SSD DC S3700 Series, Intel SSD DC P3700 Series

System equipped with the quad-socket Intel Xeon processor E7-8890 v3

  • System: Preproduction
  • Processors: Intel Xeon processor E7-8890 v3 @2.5 GHz
  • Cache: 45 MB
  • Cores: 18
  • Memory: 256 GB DDR4-1600 LV DIMM
  • SSD: Intel SSD DC S3700 Series, Intel SSD DC P3700 Series

Operating system:

  • Ubuntu* 15.10 - kernel 4.2

Software:

  • Glibc 2.21

Application:

  • ADB v1.1
  • AsiaInfo ADB OCS ktpmC workload

Test Results

Intel® Transactional Synchronization Extensions
Figure 1: Comparison between the application using the Intel® Xeon® processor E7-8890 v3 and the Intel® Xeon® processor E7-8890 v4 when Intel® Transactional Synchronization Extensions is enabled.

Figure 1 shows that the performance improved by 22 percent with Intel TSX enabled when running the application on systems equipped with Intel Xeon processor E7-8890 v4 compared to that of the Intel Xeon processor E7-8890 v3.

 

Intel® Advanced Vector Extensions 2
Figure 2: Performance improvement using Intel® Advanced Vector Extensions 2.

Figure 2 shows the data compression module performance improved by 34 percent when Intel AVX2 is enabled. This test was performed on the Intel® Xeon® processor E7-8890 v4.

 

Performance comparison between different Intel® SSDs
Figure 3: Performance comparison between different Intel® SSDs.

Figure 3 shows the performance improvement of the application using faster Intel SSDs. In this test case, replacing the Intel SSD DC S3700 Series with the Intel® SSD DC P3700 Series gained 58 percent in performance. Again, this test was performed on the Intel® Xeon® processor E7-8890 v4.

 

Conclusion

AsiaInfo ADB gains more performance by taking advantage of Intel TSX and Intel AVX2 as well as better platform capabilities, such as more cores and a larger cache, resulting in improved customer experiences.

References

  1. AsiaInfo company information
  2. Online transaction processing
  3. Online charging system
  4. Intel AVX2
  5. Intel TSX
  6. GNU Library C

Lindsays TEST article Zero Theme

class="button-cta"

Class Collapse List (3up)

Here is a caption for all of the 3-up images - it is inside the outer div end tag.

 :

Images in a Plain P with a Space Between Each

  
Here is a caption for all of the 3-up images - it is inside the p end tag, preceded by a break.

 

Image Floats

Image class="floatLeft"

 

Image class="floatRight"

 

Image class="half-float-left"

 

Image class="half-float-right"

 

Other float classes:
class="one-third-float-left"
class="one-third-float-right"
class="one-quarter-float-left"
class="one-quarter-float-right"

Clear floats with p class="clearfix"

 

Text types - h1 thru h4 are their own tags; remainder are in a <p>.

H1 Header

H2 Header

H3 Header

H3 Grey (class="grey-heading")

H4 Header

Strong Text

Italic Text

Plain Text

SuperscriptText

SubscriptText

 


 

This paragraph is style="margin-left:.5in;" - or is it?

 

Lists

Standard Unordered

  • Item 1
    • Sub-list (ul)
    • Sub 2
  • Item 2
    1. Sub-list (ol)
    2. Sub 2
  • Item 3

Standard Ordered

  1. Item 1
    • Sub-list (ul)
    • Sub 2
  2. Item 2
    1. Sub-list (ol)
    2. Sub 2
  3. Item 3

Special styles - do they work?

None

  • First (style="list-style-type: none;")
  • Second
  • Third
  • Fourth

Lower Alpha

  1. Arizona (ol style="list-style-type:lower-alpha)
    1. Phoenix
    2. Tucson
  2. Florida
  3. Hawaii

Lower Roman

  1. Alpha (ol style="list-style-type:lower-roman;")
  2. Bravo
  3. Charlie

Other Start

  1. Red (ol start="4")
  2. Blue
  3. Green

 

Inline Code

THE OAK AND THE REEDS

An Oak that grew on the bank of a river was uprooted by a severe gale of wind, and thrown across the stream.

It fell among some Reeds growing by the water, and said to them, "How is it that you, who are so frail and slender, have managed to weather the storm, whereas I, with all my strength, have been torn up by the roots and hurled into the river?""You were stubborn," came the reply, "and fought against the storm, which proved stronger than you: but we bow and yield to every breeze, and thus the gale passed harmlessly over our heads."

 

Sample Code Blocks

class="brush:cpp;"

float depthBuffer  = DepthBuffer.Sample( SAMPLER0, screenPosUV ).r;
float div          = Near/(Near-Far);
float depth        = (Far*div)/(div-depthBuffer);
uint  indexAtDepth = uint(totalRaymarchCount * (depth-zMax)/(zMin-zMax));

 

class="brush:java;"

  azureTable.setDefaultClient({
    accountUrl: 'https://' + this.accountName + '.table.core.windows.net/',
    accountName: this.accountName,
    accountKey: this.config.accessKey,
    timeout: 10000
  });

 

class="brush:plain;"

float depthBuffer  = DepthBuffer.Sample( SAMPLER0, screenPosUV ).r;
float div          = Near/(Near-Far);
float depth        = (Far*div)/(div-depthBuffer);
uint  indexAtDepth = uint(totalRaymarchCount * (depth-zMax)/(zMin-zMax));

 

Image Width Test

 

Tables

class="no-alternate" (also has style="width: 100%;")

Baltimore | London | Paris | Tokyo
Maryland Crab Cakes | Fish and Chips with Mushy Peas | Boeuf Bourgogne | Tonkatsu
National Aquarium | Tate Modern | Eiffel Tower | Kitanomaru Park

class="all-grey"

Baltimore | London | Paris | Tokyo
Maryland Crab Cakes | Fish and Chips with Mushy Peas | Boeuf Bourgogne | Tonkatsu
National Aquarium | Tate Modern | Eiffel Tower | Kitanomaru Park

class="grey-alternating-rows"

Baltimore | London | Paris | Tokyo
Maryland Crab Cakes | Fish and Chips with Mushy Peas | Boeuf Bourgogne | Tonkatsu
National Aquarium | Tate Modern | Eiffel Tower | Kitanomaru Park

class="alt-col" (this format is the default, when a table has no class)

Baltimore | London | Paris | Tokyo
Maryland Crab Cakes | Fish and Chips with Mushy Peas | Boeuf Bourgogne | Tonkatsu
National Aquarium | Tate Modern | Eiffel Tower | Kitanomaru Park

Lindsays TEST article IDZone

class="button-cta"

Yes, you can embed a YouTube video in an article! Remember to use the "http://www.youtube.com/watch?v=" URL, and not "https" or "youtu.be" formats.

How about a larger size of the same video? NO. iFrame not available in an article.

 

Class Collapse List (2up)


Figure 1. Data set layout.

Figure 1 shows the objects of the data set are all over the space.


Figure 2. Initial positions of the centroids.

Figure 2 shows the initial positions of the centroids. In general, these initial positions are chosen randomly, preferably as far apart from each other as possible.


Figure 3. New positions of the centroids after one iteration.

Figure 3 shows the new positions of the centroids. Note that the two lower centroids are re-adjusted to be closer to the two lower chunks of objects


Figure 4. New positions of the centroids in subsequent iterations.

Figure 4 shows the new positions of the centroids after many iterations. Note that the positions of the centroids don’t vary too much compared to those in Figure 3. Since the positions of the centroids are stabilized, the algorithm will stop running and consider those positions final.


Figure 5.

Figure 5 shows that the data set has been grouped into three separate clusters.

 

Class Collapse List (3up)

Here is a caption for all of the 3-up images - it is inside the outer div end tag.

 :

Images in a Plain P with a Space Between Each

  
Here is a caption for all of the 3-up images - it is inside the p end tag, preceded by a break.

 

Class Collapse List (4up)

Here is a caption for all of the 4-up images - it is inside the outer div end tag.

 

Figure 3 (Left) - Level Up Winners demonstrating on the Razer Blade Stealth, (Right) 4 person local multiplayer on Core i7 powered Intel NUC (codenamed Skull Canyon)

Around the show floor, game devs took advantage of our Demo Depot Rentals program which offers game devs equipment and on-site support at very competitive rates. In addition to the nearly dozen or so studios around the floor with our rental hardware, our sponsorship of Indie Mega Booth loaned in about 40 TVs and helped offset the cost of booth for some deserving teams.

Figure 4 (Upper Left) - Interabang, Supertype and High Horse all showing their games in The MIX space. (Upper Right) - SMG showing Death Squared in MegaBooth (Lower Left) Surprise Attack showing as part of the PAX AUS Roadshow on the 6th floor (Lower Right) Vlambeer showed all 12 of their games

 

Image Floats

Image class="floatLeft"

 

Image class="floatRight"

 

Image class="half-float-left"

 

Image class="half-float-right"

 

Other float classes:
class="one-third-float-left"
class="one-third-float-right"
class="one-quarter-float-left"
class="one-quarter-float-right"

Clear floats with p class="clearfix"

 

Text types - h1 thru h4 are their own tags; remainder are in a <p>.

H1 Header

H2 Header

H3 Header

H3 Grey (class="grey-heading")

H4 Header

Strong Text

Italic Text

Plain Text

SuperscriptText

SubscriptText

 


 

Text styles

This is Intel Clear class in the paragraph.

Can I specify Intel Clear font in a span? WYSIWYG says Yes.

Can I specify Intel Clear font in the paragraph? WYSIWYG says Yes.

Can I specify Courier New font in a span? WYSIWYG says Yes.

Can I specify Courier New font in the paragraph? WYSIWYG says Yes.

Can I specify 44px font size in a span? WYSIWYG says Yes.

Can I specify 44px font size in the paragraph? WYSIWYG says Yes.

Can I specify 400% font size in a span? WYSIWYG says Yes.

Can I specify 400% font size in the paragraph? WYSIWYG says Yes.

Can I specify #ff1493 font color in a span? WYSIWYG says Yes.

Can I specify #ff1493 font color in the paragraph? WYSIWYG says No.

Can I specify deeppink font color in a span? WYSIWYG says Yes.

Can I specify #ff1493 deeppink font color in the paragraph? WYSIWYG says No.

Can I specify style="margin-left:.5in;" in a span? WYSIWYG says Yes.

Can I specify style="margin-left:.5in;" in the paragraph? WYSIWYG says Yes.

 

Lists

Standard Unordered

  • Item 1
    • Sub-list (ul)
    • Sub 2
  • Item 2
    1. Sub-list (ol)
    2. Sub 2
  • Item 3

Standard Ordered

  1. Item 1
    • Sub-list (ul)
    • Sub 2
  2. Item 2
    1. Sub-list (ol)
    2. Sub 2
  3. Item 3

Special styles - do they work?

None

  • First (style="list-style-type: none;")
  • Second
  • Third
  • Fourth

Lower Alpha

  1. Arizona (ol style="list-style-type:lower-alpha)
    1. Phoenix
    2. Tucson
  2. Florida
  3. Hawaii

Lower Roman

  1. Alpha (ol style="list-style-type:lower-roman;")
  2. Bravo
  3. Charlie

Other Start

  1. Red (ol start="4")
  2. Blue
  3. Green

 

Inline Code

THE OAK AND THE REEDS

An Oak that grew on the bank of a river was uprooted by a severe gale of wind, and thrown across the stream.

It fell among some Reeds growing by the water, and said to them, "How is it that you, who are so frail and slender, have managed to weather the storm, whereas I, with all my strength, have been torn up by the roots and hurled into the river?""You were stubborn," came the reply, "and fought against the storm, which proved stronger than you: but we bow and yield to every breeze, and thus the gale passed harmlessly over our heads."

 

Sample Code Blocks

class="brush:cpp;"

float depthBuffer  = DepthBuffer.Sample( SAMPLER0, screenPosUV ).r;
float div          = Near/(Near-Far);
float depth        = (Far*div)/(div-depthBuffer);
uint  indexAtDepth = uint(totalRaymarchCount * (depth-zMax)/(zMin-zMax));

 

class="brush:java;"

  azureTable.setDefaultClient({
    accountUrl: 'https://' + this.accountName + '.table.core.windows.net/',
    accountName: this.accountName,
    accountKey: this.config.accessKey,
    timeout: 10000
  });

 

class="brush:plain;"

float depthBuffer  = DepthBuffer.Sample( SAMPLER0, screenPosUV ).r;
float div          = Near/(Near-Far);
float depth        = (Far*div)/(div-depthBuffer);
uint  indexAtDepth = uint(totalRaymarchCount * (depth-zMax)/(zMin-zMax));

 

Image Width Test

 

Tables

class="no-alternate" (also has style="width: 100%;")

Baltimore | London | Paris | Tokyo
Maryland Crab Cakes | Fish and Chips with Mushy Peas | Boeuf Bourgogne | Tonkatsu
National Aquarium | Tate Modern | Eiffel Tower | Kitanomaru Park

class="all-grey"

Baltimore | London | Paris | Tokyo
Maryland Crab Cakes | Fish and Chips with Mushy Peas | Boeuf Bourgogne | Tonkatsu
National Aquarium | Tate Modern | Eiffel Tower | Kitanomaru Park

class="grey-alternating-rows"

Baltimore | London | Paris | Tokyo
Maryland Crab Cakes | Fish and Chips with Mushy Peas | Boeuf Bourgogne | Tonkatsu
National Aquarium | Tate Modern | Eiffel Tower | Kitanomaru Park

class="alt-col" (this format is the default, when a table has no class)

Baltimore | London | Paris | Tokyo
Maryland Crab Cakes | Fish and Chips with Mushy Peas | Boeuf Bourgogne | Tonkatsu
National Aquarium | Tate Modern | Eiffel Tower | Kitanomaru Park

Back to Pain Points: Marketing Your Enterprise B2B App

When it comes to marketing your B2B enterprise app, a lot of the groundwork has already been laid. Unlike a consumer app, which may be developed with consumer insights in mind but still somewhat in a vacuum, your app grew out of working closely with a select group of potential customers, so you should have a good idea of what they need to hear to move forward. Your understanding of pain points led to the development of a solid product plan, and the creation of your proof of concept then helped refine the product and further strengthen your customer relationships. With all of that insight and information, you're now ready to scale the product to its final version and market it to real, paying customers. To get those customers on board, you'll need to convince them that your product solves a real need, helping workers be more efficient and improving the company's bottom line.

How to Find and Reach Out to Potential Customers

Some of your potential customers are already familiar to you. Those initial interviewees and proof-of-concept partners will hopefully be ready and eager to convert into long-term customers. But now that you have a solid product, you’re also ready to scale, and that means extending your reach and selling your app to a bigger group, reaching new customers who haven’t yet heard about your product and how it can help them.

  1. Compile a list of organizations that fit your target market.
  2. Do research to find names and contact information for decision makers representing both customer types—users and check writers.
  3. Reach out to those people in any way you can—ask for personal introductions when possible, but also make cold calls and send emails.
  4. Remember that with enterprise B2B, face to face interactions are key. Try to schedule in-person meetings whenever possible.

 

Tip:

Try targeting your efforts to people at the Director level. Directors are great because once they buy in, they can take your product up to C level and down to users, becoming your key sponsor within the company.

Craft Your Messaging

As we’ve said, the main focus of your marketing efforts should be on the pain points, and how your product can provide solutions. But how you approach this will also depend on how well-accepted the pain points you’ve identified are. In other words, is this a universally-acknowledged issue? If so, you’ll need to explain why and how you’ve addressed this issue best. However, if it’s something that’s not as well-accepted or well-understood, you’ll need to start with a fair bit of education around what this pain point is and why they should be looking for ways to solve it.

For example, let’s say that you’ve created a new, simple but highly secure file sharing app designed to help creatives share large files with partners and clients. If it was a few years ago, when organizations weren't particularly savvy to the risks of employees putting company files on a public cloud, then you would've needed to start by educating them about the need for a secure system. However, if you were launching this product today, you'd be able to jump right into why your product’s security features are better than the competition’s.

You will also need to craft distinct messages for your two audiences. As in our example of an e-commerce portal for marketers: when you talk to your key user, the e-commerce analyst, you'll connect with them on the problem of using multiple programs to update inventory and track sales, and explain how your product will make this process more efficient. For the check writer, you'll want to focus on how your app increases sales and decreases costs.

Be Open to New Insights

Just because you’ve done careful work in the first two phases, that doesn’t mean that you won’t discover new pain points now. As you talk to more people, you may discover that you need to hone your message for certain verticals, or that a different pain point is really more relevant. You might also determine that there are some new features that should be added to the next version of the product.

Marketing your enterprise B2B app based on pain points means that you're talking to potential customers about the things that really matter to them. You aren't selling them a slick new technology, shiny but unnecessary, and you aren't trying to force a one-size-fits-all solution that doesn't address their specific business needs. When you sit down in a meeting with someone from your contact list, you'll be able to demonstrate that you've been listening, that you understand the issues they face in their business, and that you've created the best product to bring their business forward.

Intel Premier Support Legacy Status Update

Current Status:

December 14, 2016 - Intel Premier Support legacy tool is now up and running.
Thank you for your patience. 

Installing Android Things* on Intel® Edison Kit for Arduino*

This document describes how to set up your Intel® Edison kit for Arduino* with Android Things*.

Android Things is an open-source operating system from Google that can run on a wide variety of development boards, including the Intel Edison device. Android Things is based on Android* and the Linux* kernel. For more information about Android Things, see https://developer.android.com/things.
 

Installing Android Things* and Unbricking a Sparkfun* Blocks for Intel® Edison Module

This document describes how to set up your Sparkfun* Blocks for Intel® Edison Module with Android Things*.

Android Things is an open-source operating system from Google that can run on a wide variety of development boards, including the Intel Edison device. Android Things is based on Android* and the Linux kernel. For more information about Android Things, see https://developer.android.com/things.