
Understanding the IoT Ecosystem


Anatomy of the IoT Ecosystem

Wearables and home automation devices dominate the IoT market today, but the overall ecosystem for the IoT will evolve. Figure 1 illustrates a simplified IoT ecosystem:

  • At the left are the edge devices. These are the IoT end points, and they provide the means to sense and control their environment through sensors and actuators.
  • Gateways aggregate data from edge devices and present it to the cloud (while providing control from the cloud). In some cases, gateways may process data to add value into the ecosystem.
  • The cloud provides the means to store data and perform analytics. Its importance lies in a set of resources that can scale up or down elastically as a function of need.
  • The cloud enables monetization of the data and control through application programming interfaces and apps (which may or may not reside in the cloud, as well).
  • At the top is management and monitoring across all levels of the ecosystem.
  • At the bottom are technologies that enable development, test, and other critical capabilities, such as end-to-end security (for the data and control planes).


Figure 1. Simplified Internet of Things Ecosystem

Now, let’s look at each part of the IoT ecosystem and which technologies are applied.

Edge

At the edge of the IoT ecosystem are connected devices that can sense and actuate at various levels of complexity. In the wearables space, you’ll find smart wristbands and watches that include biometric sensing. In the automotive space, you’ll find networks of smart devices that cooperatively create a safer and more enjoyable driving experience (through sensors that improve drivetrain efficiency or tune automotive parameters based on altitude or temperature).

In the space of power-conscious wearable devices, you’ll find processors like Intel® Quark™ SoC (in the tiny Intel® Curie™ Compute Module), which can operate from a coin-sized battery, and includes a six-axis combination sensor (accelerometer and gyroscope). For greater processing power, the Intel® Edison compute module supports both single-core and dual-core Intel® Atom™ CPUs. The Intel® Edison board can run Yocto Linux*, so the software ecosystem enabled there creates endless development opportunities (see Figure 2).


Figure 2. The Intel® Curie™ Compute Module and Intel® Edison board

Gateway

When we talk about the IoT, the emphasis is on things—lots of them. For this reason, the gateway is an integral part of the IoT ecosystem, bridging small edge devices that may include little intelligence to the cloud (where the data may be monetized). The gateway can serve one of two primary functions (or both): it can be the bridge that migrates collected edge data to the cloud (and provides control from the cloud), and it can also be a data processor, reducing the mass of available data on its way to the cloud or making decisions immediately based on the data available. For this reason, gateways tend to be more powerful than the edge devices.

The Intel® IoT Gateway is a platform for development of IoT gateway apps (see Figure 3). What makes this a platform is the integration of key communication technologies (including Ethernet, Wi-Fi, Bluetooth*, and ZigBee*, as well as 2G, 3G, and Long-Term Evolution) and sensor/actuator interfaces (RS-232, analog/digital input/output), with processing capability from single-core Intel® Quark SoCs to dual-core and quad-core Intel® Atom™ and Intel® Core™ processors. To simplify development, the Intel IoT Gateway supports Wind River Linux* 7, Windows® 10, or the Snappy Ubuntu* Core (with integrated driver support for the various interfaces, allowing you to focus on your app).


Figure 3. The Intel® IoT Gateway Platform

You can simplify development further by using the Wind River* Intelligent Device Platform XT. Intelligent Device Platform XT is a customizable middleware development environment that provides, among other things, security and management technologies. Although these features are commonly developed as an afterthought, bringing security and manageability in at the start enables a world-class IoT gateway that protects your data and minimizes downtime.

Cloud

Think of the cloud as an essential part of the IoT ecosystem, given its attributes of scaling and elasticity. As data grows from edge devices, the ability to scale the storage and networking assets as well as compute resources becomes a key enabler for the development of IoT systems.

What makes elastic compute possible is a technology called virtualization. Using virtualization, you can carve up a processor to represent two or more virtual processors. Each virtual processor time-shares the physical processor such that when one needs less computing power, another virtual processor (and the software that occupies it) can exploit those physical resources.

Virtualization has been around for some time, but you can find extensions in modern processors that make the technology more efficient. As you’d expect, you can find these virtualization extensions in powerful Intel® Xeon® processors for data centers, but you can also find them in lower-power Intel® Atom™ processors.

Virtualization means that when more IoT data flows from edge devices, physical processors can be carved up and associated with these data flows. When the flow of data subsides, these resources can be idled or reassigned to other tasks to save power and cost.

Management and Monitoring

A complexity that the IoT creates is the monitoring and management of gateways and edge-devices. Considering that an IoT system could contain thousands of gateways with many millions of sensor and actuator endpoints, management and monitoring present new challenges.

Although it’s possible to build a custom cloud-based set of apps to meet this challenge, you must also consider time-to-market constraints. This is one of the reasons Wind River created the Wind River Helix* Device Cloud. Device Cloud is a cloud-based IoT platform that provides device management, end-to-end security, and telemetry and analytics. Device Cloud is a technology stack that operates from the edge device into the cloud and offers data capture, data analysis, and overall monitoring and management for IoT systems at their scale. Device Cloud is also fully integrated with Intel® IoT Gateway Technology, as well as a portfolio of operating systems, such as Wind River Linux and VxWorks*.

Analytics

The key behind IoT is data, and this is where you create value. IoT data can come in many forms, but it commonly has two attributes: its scale and its relationship with time.

Realization of the IoT was partially enabled by big data processing systems. These systems were designed for datasets that require nontraditional processing methods. The datasets that massive numbers of edge devices create in an IoT ecosystem are a perfect match. The other aspect of IoT data is that it tends to be time-series data. Its storage and analysis are better suited to big data processing systems and NoSQL databases than traditional approaches.

Apache Hadoop* (such as provided through Cloudera) remains the key big-data processing system that includes an ecosystem in itself of technologies to address a range of needs. Apache NiFi*, for example, is a dataflow system that permits flow-based programming through directed graphs (perfect for streams of time-series data). Apache Cassandra*, which differs from the batch-oriented Hadoop Distributed File System (HDFS), is a NoSQL database distributed across nodes and supports clusters spanning geographically distributed data centers. The Cassandra data model is also ideal for real-time processing of time-series data (using a hybrid key-value and column-oriented database). Figure 4 illustrates these components’ relationships.


Figure 4. Relationship of big data processing systems and their file systems

Analytics is an ideal match for the cloud. The ability to scale compute resources as a function of dataset size or processing speed requirements makes the cloud a perfect platform for analyzing IoT data with systems like NiFi. Elastically expanding the compute capabilities for processing a dataset, and then gracefully decreasing those resources when no longer needed minimizes infrastructure cost.

Enabling Technologies

The IoT ecosystem is enabled by a collection of other technologies that are important to understand. Let’s focus on some of the technologies for development and test and a few technologies that live inside devices within the IoT ecosystem:

  • The Wind River Helix App Cloud is a browser-based development environment for IoT apps. Using App Cloud, you can develop code, build on top of Wind River operating systems, and simplify app testing using devices such as the Edison board. Because it’s a browser-based development environment, you can attach to your environment from anywhere with everything you’d expect from a world-class integrated development environment.
  • Wind River Helix Lab Cloud, fully integrated with App Cloud, allows broad testing of apps over a range of virtualized devices. Using Lab Cloud, you can create a device configuration that represents a physical device, and then virtualize that device in the cloud. Using App Cloud, you can then load your code onto the device for validation. As a set of virtualized resources, you could create thousands of devices for testing, allowing you to find bugs more quickly. Lab Cloud helps you make reliable IoT apps at the edge device or gateway.
  • Wind River Rocket* is a best-in-class real-time operating system (RTOS) designed for the IoT, running on hardware such as the Intel® Edison board. Rocket was designed to be scalable, running in as little as 4 KB of memory for power- and memory-constrained systems. Rocket provides all the services you'd expect from an RTOS, including multithreading, and is preintegrated with App Cloud, making it simple to build gateway or edge device apps in minimal time.
  • Wind River Pulsar* Linux is a Linux distribution for small, high-performance IoT systems that require security and manageability. Pulsar supports kernel reconfiguration so that you can tailor it to your needs and includes capabilities like virtualization for building complex IoT apps. You'll also take advantage of continuous updates to ensure a reliable and secure platform. You can use Pulsar on a variety of hardware solutions, such as the MinnowBoard MAX* board with an Intel® Atom™ CPU.

Summary

The IoT ecosystem is created from a broad set of technologies but with a common thread of manageability and security. To build an end-to-end platform for the IoT, you need practical knowledge of many disciplines, but leveraging pre-validated and pre-integrated assets that work together makes this task not only simple but enjoyable.


Recipe: ROME1.0/SML for the Intel® Xeon Phi™ Processor 7250


Overview

This article provides a recipe for obtaining, compiling, and running ROME1.0 SML on Intel® Xeon® processors and Intel® Xeon Phi™ processors. Because SML uses the output of the MAP processing phase, you must run MAP first; this document therefore describes how to run both MAP and SML. Follow the instructions below to run the MAP and SML workloads.

The source and test workloads for this version of ROME can be downloaded from: http://ipccsb.dfci.harvard.edu/rome/download.html.

Introduction

ROME (Refinement and Optimization via Machine lEarning for cryo-EM) is one of the major research software packages from the Dana-Farber Cancer Institute. ROME is a parallel computing software system dedicated to high-resolution cryo-EM structure determination and data analysis, implementing advanced machine learning approaches optimized for HPC clusters. ROME 1.0 introduces SML (statistical manifold learning)-based deep classification, following MAP-based (maximum a posteriori) image alignment. More information about ROME can be found at http://ipccsb.dfci.harvard.edu/rome/index.html.

The ROME system has been optimized for both Intel® Xeon® processors and Intel® Xeon Phi™ processors. Detailed information about the underlying algorithms and optimizations can be found at http://arxiv.org/abs/1604.04539.

In this document, we used three workloads: Inflammasome, RP-a and RP-b. The workload descriptions are as follows:

  • Inflammasome data: 16306 images of NLRC4/NAIP2 inflammasome with a size of 250 × 250 pixels
  • RP-a: 57001 images of proteasome regulatory particles (RP) with a size of 160 × 160 pixels
  • RP-b: 35407 images of proteasome regulatory particles (RP) with a size of 160 × 160 pixels

In this document, we use “ring11_all” to refer to the Inflammasome workload, “data6” to refer to the RP-a workload, and “data8” to refer to the RP-b workload.

Preliminaries

  1. To match these results, the Intel Xeon Phi processor machine needs to be booted with BIOS settings for quad cluster mode and MCDRAM cache mode. Please review this document for further information. The Intel Xeon processor system does not need to be started in any special manner.
  2. To build this package, install the Intel® MPI Library for Linux* 5.1 (Update 3) and Intel® Parallel Studio XE Composer Edition for C++ Linux* Version 2016 (Update 3) or later on your systems.
  3. Download the source ROME1.0a.tar.gz from http://ipccsb.dfci.harvard.edu/rome/download.html
  4. Unpack the source code to /home/users.

    > cp ROME1.0a.tar.gz /home/users
    > tar -xzvf ROME1.0a.tar.gz

     
  5. The workloads are provided by the Intel® Parallel Computing Center for Structural Biology (http://ipccsb.dfci.harvard.edu/). As noted above, the workloads can be downloaded from http://ipccsb.dfci.harvard.edu/rome/download.html. Following the EMPIAR-10069 link, download Inf_data1.* (Set 1) and rename them ring11_all.*. Download RP_data2.* (Set 2) and rename them data8.*. Download RP_data4.* (Set 4) and rename them data6.*. The scripts referred to below can be obtained by pulling the file KNL_LAUNCH.tgz from http://ipccsb.dfci.harvard.edu/rome/download.html
  6. Copy the workloads and run scripts to your home directory. You should have the following files:

    >cp ring11_all.star /home/users
    >cp ring11_all.mrcs /home/users
    >cp data6.star /home/users
    >cp data6.mrcs /home/users
    >cp data8.star /home/users
    >cp data8.mrcs /home/users
    >cp run_ring11_all_map_XEON.sh /home/users
    >cp run_ring11_all_sml_XEON.sh /home/users
    >cp run_ring11_all_map_XEONPHI.sh /home/users
    >cp run_ring11_all_sml_XEONPHI.sh /home/users
    >cp run_data6_map_XEON.sh /home/users
    >cp run_data6_sml_XEON.sh /home/users
    >cp run_data6_map_XEONPHI.sh /home/users
    >cp run_data6_sml_XEONPHI.sh /home/users
    >cp run_data8_map_XEON.sh /home/users
    >cp run_data8_sml_XEON.sh /home/users
    >cp run_data8_map_XEONPHI.sh /home/users
    >cp run_data8_sml_XEONPHI.sh /home/users

Prepare the binaries for the Intel Xeon processor and the Intel Xeon Phi processor

  1. Set up the Intel® MPI Library and Intel® C++ Compiler environments:

    > source /opt/intel/impi/<version>/bin64/mpivars.sh
    > source /opt/intel/composer_xe_<version>/bin/compilervars.sh intel64
    > source /opt/intel/mkl/<version>/bin/mklvars.sh intel64

     
  2. Set environment variables for compilation of ROME:

    >export ROME_CC=mpiicpc
     
  3. Build the binaries for the Intel Xeon processor.

    >cd /home/users/ROME1.0a
    >make
    >mkdir bin
    >mv rome_map bin/rome_map
    >mv rome_sml bin/rome_sml

     
  4. Build the binaries for the Intel Xeon Phi processor.

    >cd /home/users/ROME1.0a
    >vi makefile
    Modify FLAGS as follows:
    FLAGS := -mkl -fopenmp -O3 -xMIC-AVX512 -DNDEBUG -std=c++11
    >make
    >mkdir bin_knl
    >mv rome_map bin_knl/rome_map
    >mv rome_sml bin_knl/rome_sml

Run the test workloads on the Intel Xeon processor (an Intel® Xeon® processor E5-2697 v4 is assumed by the scripts)

  1. Running the ROME MAP phase for these workloads:

    Running workload1: ring11_all
    >cd /home/users/
    >sh run_ring11_all_map_XEON.sh


    Running workload2: data6
    >cd /home/users/
    >sh run_data6_map_XEON.sh


    Running workload3: data8
    >cd /home/users/
    >sh run_data8_map_XEON.sh

     
  2. Running the ROME SML phase for these workloads:

    Running workload1: ring11_all
    >cd /home/users/
    >sh run_ring11_all_sml_XEON.sh


    Running workload2: data6
    >cd /home/users/
    >sh run_data6_sml_XEON.sh


    Running workload3: data8
    >cd /home/users/
    >sh run_data8_sml_XEON.sh

Run the test workloads on the Intel Xeon Phi processor

  1. Running the ROME MAP phase for these workloads:

    >cd /home/users/
    Running workload1: ring11_all
    >cd /home/users/
    >sh run_ring11_all_map_XEONPHI.sh


    Running workload2: data6
    >cd /home/users/
    >sh run_data6_map_XEONPHI.sh


    Running workload3: data8
    >cd /home/users/
    >sh run_data8_map_XEONPHI.sh

     
  2. Running ROME SML phase for these workloads:

    Running workload1: ring11_all
    >cd /home/users/
    >sh run_ring11_all_sml_XEONPHI.sh


    Running workload2: data6
    >cd /home/users/
    >sh run_data6_sml_XEONPHI.sh


    Running workload3: data8
    >cd /home/users/
    >sh run_data8_sml_XEONPHI.sh

Performance gain seen with ROME SML

For the workloads described above, the following graph shows the speedups achieved from running this code on the Intel Xeon Phi processor. As you can see, up to a 2.37x speedup for the ring11_all workload can be achieved when running this code on one Intel® Xeon Phi™ processor 7250 versus a two-socket Intel Xeon processor E5-2697 v4 system. The data used below were stored on a Lustre* file system.

Speedups achieved from running this code on the Intel Xeon Phi processor

Testing platform configuration:

Intel Xeon processor E5-2697 v4: dual-socket BDW-EP node, 18 cores/socket with Intel® Hyper-Threading Technology enabled, 2.3 GHz, 145 W, 128 GB RAM, Red Hat Enterprise Linux* Server release 6.7 (Santiago)

Intel Xeon Phi processor 7250: 68 cores, 272 threads, 1400 MHz core frequency, 16 GB MCDRAM at 7.2 GT/s, 96 GB DDR4-2400, Red Hat Enterprise Linux* Server release 6.7 (Santiago), quad cluster mode, MCDRAM cache mode

Heterogeneous Computing Implementation via OpenCL™


1. Abstract

OpenCL™ is the open standard for programming across multiple computing devices, such as CPUs, GPUs, and FPGAs, which makes it an ideal framework for heterogeneous computing. This article is a step-by-step guide on the methodology of dispatching a workload to all OpenCL devices in the platform with the same kernel to jointly achieve a computing task. Although the article focuses only on Intel® processors, Intel® HD Graphics, Iris™ graphics, and Iris™ Pro graphics, the methodology theoretically works on all OpenCL-compliant computing devices. Readers are assumed to have a basic understanding of OpenCL programming. The OpenCL framework, platform model, execution model, and memory model [1] are not discussed here.

2. Concept of Heterogeneous Computing Implementation

In an OpenCL platform, the host contains one or more compute devices. Each device has one or more compute units, and each compute unit has one or more processing elements that can execute kernel code (Figure 1).


Figure 1: OpenCL™ platform model [2].

From the software implementation perspective, one normally starts an OpenCL program by querying the platform. A list of devices can then be retrieved, and the programmer chooses a device from that list. The next step is to create a context: the chosen device is associated with the context, and a command queue is created for the device.

Since one context can be associated with multiple devices, the idea is to associate both the CPU and the GPU with the context and to create a command queue for each targeted device (Figure 2).


Figure 2: Topology of multiple devices from a programming perspective.

The workload is enqueued to the context (in either buffer or image object form) and is thus accessible to all devices associated with the context. The host program can distribute different amounts of the workload to those devices.

Assuming XX% of the workload is offloaded to the CPU and YY% to the GPU, the values of XX and YY can be chosen arbitrarily as long as XX% + YY% = 100% (Figure 3).


Figure 3: Workload dispatch of the sample implementation.

3. Result

In a sample Lattice-Boltzmann Method (LBM) OpenCL heterogeneous computing implementation with a 100 × 100 × 130 floating-point workload, normalized performance for different combinations of XX% (the percentage of the workload sent to the CPU) and YY% (the percentage sent to the GPU) is illustrated in Figure 4. The performance was evaluated on a 5th generation Intel® Core™ i7 processor with Iris™ Pro graphics. Note that although the combination (XX, YY) = (50, 50) gives the maximum performance gain here (around 30%), that is not the general case. Different kernels might fit better on either the CPU or the GPU, so the best (XX, YY) combination must be evaluated case by case.


Figure 4: Normalized (XX, YY) combination performance statistics.

4. Implementation Detail

To be more illustrative, the following discussion assumes that the workload is a 100 × 100 × 130 floating-point 3D array and that the OpenCL devices are an Intel processor and Intel HD Graphics (or Iris graphics or Iris Pro graphics). Since the implementation involves only the host-side program, the OpenCL kernel implementation and optimization are not discussed here. The pseudocode in this section omits error checking; readers are encouraged to add error-checking code when adapting it.
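
As a starting point for that error checking, here is a minimal sketch of a helper macro you could wrap around the host-side calls shown in this section; the macro name CHECK_CL and its behavior (print and exit) are assumptions, not part of the original sample.

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

// Minimal error-checking helper (hypothetical). Wrap OpenCL calls that return cl_int,
// for example: CHECK_CL( clBuildProgram(prm.clProgram, 2, prm.clDevices, NULL, NULL, NULL) );
#define CHECK_CL(call)                                              \
    do {                                                            \
        cl_int _err = (call);                                       \
        if (_err != CL_SUCCESS) {                                   \
            fprintf( stderr, "OpenCL error %d at %s:%d\n",          \
                     (int)_err, __FILE__, __LINE__ );               \
            exit( EXIT_FAILURE );                                   \
        }                                                           \
    } while (0)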

4.1 Workload

The workload assumes a 100 × 100 × 130 floating-point three-dimensional (3D) array, declared in the following form:

const int iGridSize = 100 * 100 * 130;

float srcGrid[iGridSize], dstGrid[iGridSize];   // srcGrid and dstGrid represent the source and
                                                // the destination of the workload, respectively

Although the workload is logically a 3D floating-point array, the memory is declared as a one-dimensional array so that the data can easily be placed in a cl_mem object, which simplifies data manipulation.
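
To make the 3D-to-1D mapping concrete, here is one way the flattening can be written; the IDX macro and the dimension names are illustrative assumptions, not part of the original sample.

// Hypothetical index helper: maps a logical (x, y, z) coordinate of the
// 100 x 100 x 130 grid onto the flat 1D arrays declared above.
#define NX 100
#define NY 100
#define NZ 130
#define IDX(x, y, z) ( (size_t)(z) * NX * NY + (size_t)(y) * NX + (size_t)(x) )

// Example: srcGrid[IDX(10, 20, 5)] is element (10, 20, 5) of the logical 3D grid.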

4.2 Data structures to represent the OpenCL platform

To implement the concept in Figure 2 programmatically, the OpenCL data structure must contain at least a cl_platform_id, a cl_context, and a cl_program object. So that they can be fed to the OpenCL API calls, the cl_device_id, cl_command_queue, and cl_kernel objects are declared in pointer form; they are instantiated via dynamic memory allocation according to the number of computing devices used.

typedef struct {
    cl_platform_id clPlatform;              // OpenCL platform ID
    cl_context clContext;                   // OpenCL context
    cl_program clProgram;                   // OpenCL kernel program source object
    cl_uint clNumDevices;                   // The number of OpenCL devices to use
    cl_device_id* clDevices;                // OpenCL device IDs
    cl_device_type* clDeviceTypes;          // OpenCL device type info: CPU, GPU, or
                                            // ACCELERATOR
    cl_command_queue* clCommandQueues;      // Command queues for the OpenCL devices
    cl_kernel* clKernels;                   // OpenCL kernel objects
} OpenCL_Param;

OpenCL_Param prm;

4.3 Constructing the OpenCL devices

The implementation discussed here considers the case with a single machine with two devices (CPU and GPU) so that readers can easily understand the methodology.

4.3.1 Detecting OpenCL devices

Detecting the devices is the first step of OpenCL programming. The devices can be retrieved through the following code snippet.

clGetPlatformIDs( 1, &(prm.clPlatform), NULL );
    // Get the OpenCL platform ID and store it in prm.clPlatform.

clGetDeviceIDs( prm.clPlatform, CL_DEVICE_TYPE_ALL, 0, NULL, &(prm.clNumDevices) );
prm.clDevices = (cl_device_id*)malloc( sizeof(cl_device_id) * prm.clNumDevices );
    // Query how many OpenCL devices are available in the platform; the number of
    // devices is stored in prm.clNumDevices. The proper amount of memory is then
    // allocated for prm.clDevices according to prm.clNumDevices.

clGetDeviceIDs( prm.clPlatform, CL_DEVICE_TYPE_ALL, prm.clNumDevices, prm.clDevices,
                NULL );
    // Query the OpenCL device IDs and store them in prm.clDevices.

In heterogeneous computing usage, it is important to know which device is which in order to distribute the correct amount of workload to the designated computing device. clGetDeviceInfo() can be used to query the device type information.

cl_device_type DeviceType;

prm.clDeviceTypes = (cl_device_type*)malloc( sizeof(cl_device_type) *
                                             prm.clNumDevices );
    // Allocate the proper amount of memory for prm.clDeviceTypes.

for (int i = 0; i < prm.clNumDevices; i++) {

    clGetDeviceInfo( prm.clDevices[i], CL_DEVICE_TYPE,
                     sizeof(cl_device_type), &DeviceType, NULL );
        // Query the device type of each OpenCL device and store it in
        // prm.clDeviceTypes one by one.
    prm.clDeviceTypes[i] = DeviceType;
}
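
During bring-up it can also help to print which physical device each ID refers to. A small sketch using CL_DEVICE_NAME follows; the 256-byte buffer size and the output format are assumptions, and <stdio.h> is assumed to be included.

// Optional: print each device's name and type for debugging (not part of the original sample).
for (cl_uint i = 0; i < prm.clNumDevices; i++) {
    char DeviceName[256];
    clGetDeviceInfo( prm.clDevices[i], CL_DEVICE_NAME,
                     sizeof(DeviceName), DeviceName, NULL );
    printf( "Device %u: %s (%s)\n", i, DeviceName,
            (prm.clDeviceTypes[i] == CL_DEVICE_TYPE_CPU) ? "CPU" : "GPU/other" );
}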

4.3.2 Preparing the OpenCL context

Once the OpenCL devices are located, the next step is to prepare the OpenCL context that hosts those devices. This is a straightforward step, the same as creating the context in any other OpenCL program.

cl_context_properties clCPs[3] = { CL_CONTEXT_PLATFORM,
                                   (cl_context_properties)prm.clPlatform, 0 };

prm.clContext = clCreateContext( clCPs, 2, prm.clDevices, NULL, NULL, NULL );
    // Create one context associated with both devices (the CPU and the GPU).

4.3.3 Create command queues

The command queue is the channel through which kernels, kernel parameters, and the workload are submitted to an OpenCL device. One command queue is created per OpenCL device; in this example, two command queues are created, one for the CPU and one for the GPU.

prm.clCommandQueues = (cl_command_queue*)malloc( prm.clNumDevices *
                                                 sizeof(cl_command_queue) );
    // Allocate the proper amount of memory for prm.clCommandQueues.

for (int i = 0; i < prm.clNumDevices; i++) {

    prm.clCommandQueues[i] = clCreateCommandQueue( prm.clContext, prm.clDevices[i],
                                                   CL_QUEUE_PROFILING_ENABLE, NULL );
        // Create a command queue for each OpenCL device.
}
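
Although the original text does not show it, this is also a natural point to wrap the srcGrid and dstGrid arrays from Section 4.1 in cl_mem objects so that both devices can access the workload through the shared context, as described in Section 2. A minimal sketch follows; the memory flags are assumptions and depend on how the kernel actually uses the data.

// Hypothetical buffer setup (error checking omitted, as elsewhere in this section).
cl_mem clSrcGrid = clCreateBuffer( prm.clContext,
                                   CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof(float) * iGridSize, srcGrid, NULL );
cl_mem clDstGrid = clCreateBuffer( prm.clContext,
                                   CL_MEM_WRITE_ONLY,
                                   sizeof(float) * iGridSize, NULL, NULL );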

4.4 Compiling OpenCL kernels

At this point, the topology indicated in Figure 2 has been implemented. The kernel source file should then be loaded and built for the OpenCL devices to execute. Note that there are two OpenCL devices in the platform: both device IDs must be fed to the clBuildProgram() call so that the compiler can build the proper binary code for each device. The following source code snippet assumes that the kernel source code has been loaded into a buffer, clSource, via file I/O calls, which are not detailed here (a sketch of one way to do this loading follows the snippet).

char* clSource;

// Insert the kernel source file read code here. The following code assumes the clSource buffer
// is properly allocated and loaded with the kernel source.

prm.clProgram = clCreateProgramWithSource( prm.clContext, 1, (const char**)&clSource,
                                           NULL, NULL );
clBuildProgram( prm.clProgram, 2, prm.clDevices, NULL, NULL, NULL );
    // Build the program executable for the CPU and GPU by feeding clBuildProgram() with
    // "2", the number of target devices, and the device ID list.

prm.clKernels = (cl_kernel*)malloc( prm.clNumDevices * sizeof(cl_kernel) );
for (int i = 0; i < prm.clNumDevices; i++) {
    prm.clKernels[i] = clCreateKernel( prm.clProgram, "<the kernel name>", NULL );
}
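
The file I/O that fills clSource is left out above. One possible way to do it is sketched below; the helper name LoadKernelSource and the file name lbm_kernel.cl are placeholder assumptions.

#include <stdio.h>
#include <stdlib.h>

// Hypothetical loader: reads the whole kernel source file into a NUL-terminated buffer.
char* LoadKernelSource( const char* path )
{
    FILE* fp = fopen( path, "rb" );
    if (!fp) return NULL;

    fseek( fp, 0, SEEK_END );
    long size = ftell( fp );
    rewind( fp );

    char* buf = (char*)malloc( (size_t)size + 1 );
    if (fread( buf, 1, (size_t)size, fp ) != (size_t)size) {
        free( buf );
        fclose( fp );
        return NULL;
    }
    buf[size] = '\0';       // clCreateProgramWithSource() accepts NUL-terminated strings
    fclose( fp );
    return buf;
}

// Usage: clSource = LoadKernelSource( "lbm_kernel.cl" );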

4.5 Distributing the workload

After the kernels have been built, the workload can be distributed to the devices. The following code snippet demonstrates how to dispatch the designated portion of the workload to each OpenCL device. Note that the kernel-argument setup call, clSetKernelArg(), is not demonstrated here: different kernel implementations need different arguments, so the argument-setup code is less meaningful in this example (a hedged sketch appears after the snippet).

// Put the kernel argument setting code, clSetKernelArg(), here. Note that the same arguments
// must be set on both kernel objects.

size_t dimBlock[3] = { 100, 1, 1 };                 // Work-group dimension and size
size_t dimGrid[2][3] = { {100, 100, 130}, {100, 100, 130} };
                                                    // Work-item dimension and size for each
                                                    // OpenCL device
dimGrid[0][0] = dimGrid[1][0] = (int)ceil( dimGrid[0][0] / (double)dimBlock[0] ) * dimBlock[0];
dimGrid[0][1] = dimGrid[1][1] = (int)ceil( dimGrid[0][1] / (double)dimBlock[1] ) * dimBlock[1];
    // Make sure the global work size is a multiple of the work-group size in each dimension

dimGrid[0][2] = (int)ceil( round( dimGrid[0][2] * (double)<XX> / 100.0 ) / (double)dimBlock[2] )
                * dimBlock[2];                      // Work-items for the CPU
dimGrid[1][2] = (int)ceil( round( dimGrid[1][2] * (double)<YY> / 100.0 ) / (double)dimBlock[2] )
                * dimBlock[2];                      // Work-items for the GPU
    // Assume <XX>% of the workload goes to the CPU and <YY>% to the GPU

size_t dimOffset[3] = { 0, 0, dimGrid[0][2] };      // Global offset of the GPU portion of the
                                                    // workload; it is the GPU starting point

for (int i = 0; i < 2; i++) {

    if ( CL_DEVICE_TYPE_CPU == prm.clDeviceTypes[i] )
        clEnqueueNDRangeKernel( prm.clCommandQueues[i], prm.clKernels[i],
                                3, NULL, dimGrid[0], dimBlock, 0, NULL, NULL );
    else                                            // The other device is CL_DEVICE_TYPE_GPU
        clEnqueueNDRangeKernel( prm.clCommandQueues[i], prm.clKernels[i],
                                3, dimOffset, dimGrid[1], dimBlock, 0, NULL, NULL );
        // Offload the proper portion of the workload to the CPU and GPU respectively
}
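
To complete the picture, here is a hedged sketch of the pieces omitted above: setting the kernel arguments, waiting for both devices, and reading the result back. It assumes the kernel's first two arguments are the source and destination buffers (the clSrcGrid and clDstGrid objects sketched in Section 4.3.3); a real kernel may take different arguments.

// Hypothetical argument setup; must run before the clEnqueueNDRangeKernel() calls above.
for (int i = 0; i < 2; i++) {
    clSetKernelArg( prm.clKernels[i], 0, sizeof(cl_mem), &clSrcGrid );
    clSetKernelArg( prm.clKernels[i], 1, sizeof(cl_mem), &clDstGrid );
}

// ... the clEnqueueNDRangeKernel() calls shown above go here ...

for (int i = 0; i < 2; i++)
    clFinish( prm.clCommandQueues[i] );     // Wait for both devices to finish their portions

clEnqueueReadBuffer( prm.clCommandQueues[0], clDstGrid, CL_TRUE, 0,
                     sizeof(float) * iGridSize, dstGrid, 0, NULL, NULL );
    // Copy the combined result back to the host-side dstGrid array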

5. References

[1] OpenCL 2.1 specification. https://www.khronos.org/registry/cl/

[2] Image courtesy of Khronos group.

Driving Software Simplicity with Wind River Helix* Chassis


Wind River Helix* Cockpit

Cockpit is an open source, Linux* runtime platform based on the Yocto Project*. Compatible with GENIVI* specifications, Cockpit is designed to allow you to develop and validate rich in-vehicle infotainment (IVI) apps, telematics, and automotive instrument cluster systems quickly. These systems are preintegrated with advanced connectivity and security features.

As a versatile platform for embedded software, Cockpit supports a variety of industry hardware and user-friendly human–machine interface tools. It provides a framework in which to develop complex IVI systems, significantly reducing development time. Cockpit also gives you access to Wind River Helix* App Cloud, a cloud-based software development environment that helps you develop IoT apps when multiple development centers in various locations are involved.

Open Standards-Based Foundation for the In-Vehicle Infotainment Market

Cockpit is ideal for building apps for remote vehicle tracking, automatic roadside assistance, integrated digital cockpit, web radio, or other automotive systems, such as instrument panels or center displays. It gives you a solid foundation on top of which you can add functionality, specific user interfaces (UIs), and other value-added options (Figure 1). The templates, tools, and methods help you create embedded products based on custom Linux-based systems regardless of the underlying hardware architecture. As a result, you have maximum flexibility to use your hardware of choice.

Figure 1. Wind River* Helix* Cockpit architecture

Key features of Cockpit include:

  • Connectivity framework: This framework consists of a UI and connectivity links for external (cloud) and in-vehicle communications. Optional connectivity components include iPod*, Apple* CarPlay*, MirrorLink*, Google AAP, Wi-Fi*, and Bluetooth*.
  • Firmware and software over-the-air (OTA) management: Cockpit supports OTA management to wirelessly manage and update both firmware and software throughout the product life cycle.
  • Flexible platform: Cockpit supports multiple hardware and board support packages (BSPs). This comprehensive, commercially supported development toolchain includes runtime observation, a debugger, memory profiling, a BitBake build system, and Wind River Workbench.
  • Long-term support: The secure Linux base allows future extensions and updates to keep up with the evolution in IoT technologies, protocols, and product offerings.
  • Built-in security: Security Profile for Wind River Linux along with secure boot, device authentication, and other runtime security checks guarantee secure data handling.

Figure 2. Wind River* Helix* Chassis layout diagram

Wind River Helix Drive

Drive is built on VxWorks* 653 3.0 Multi-core Edition, a widely deployed real-time operating system (RTOS) in industrial, defense, automotive, aerospace, and other safety- and security-critical applications. With VxWorks 653 Multi-core Edition, you get improved performance, scalability, and a future-proof operating environment for the IoT. Drive provides you with an International Organization for Standardization (ISO) 26262–certified platform for safety-critical automotive applications, from piloted and highly automated driving to advanced driver assistance systems (ADAS). Let’s look at some of the key benefits of using Drive.

Safety

For ADAS and autonomous driving functions, the standards for safety are stringent. Drive allows you to develop and integrate multiple apps with different safety criticality levels on a single hardware platform. Separation and isolation of these applications are maintained in compliance with ISO 26262.

Drive supports the latest multi-core processors and provides robust partitioning that enables DO-178C certification. It also gives you the following benefits:

  • The multicore-enabled scheduler can support a variety of guest operating systems. Apps can run in parallel, so effective compute capacity is increased. Space and time partitioning for each core is also ensured.
  • The two-level virtual machine architecture significantly improves performance and lowers jitter.
  • The number of partitions can scale up to 255.
  • Drive can support multiple safety levels simultaneously.

Streamlined AUTomotive Open System ARchitecture Software Component Integration

Drive conforms to AUTomotive Open System ARchitecture (AUTOSAR) development methodologies for software modules. It supports standardized connectivity and functional interfaces to other automotive software components, enabling faster integration and interoperability.

Robust Security for the Connected Car

Safety- and security-critical automotive systems must prevent the injection and execution of malicious code into the system. Drive has multiple features in place to provide malware protection:

  • It allows only authenticated (signed) binaries to run.
  • Drive enforces secure boot (in conformance with International Electrotechnical Commission 15408) using Intel® Trusted Platform Module and ARM* TrustZone*.

The secure boot function verifies binaries at every stage of the boot process. If a component fails to pass signature verification, the boot process will stop.

Connectivity

Drive uses the Data Distribution Service (DDS), a real-time, low-latency middleware protocol and application programming interface (API) standard from the Object Management Group, to provide data connectivity among safety- and security-critical applications. It uses the Socket Controller Area Network (SocketCAN) to provide a uniform interface for opening multiple sockets at the same time to listen for and send frames to CAN identifiers. With the Wind River Certified Network Stack (an embedded TCP/User Datagram Protocol/IP version 4 network stack with multicast), Drive supports a BSD socket API, enabling easy migration of networking software from VxWorks and Linux platforms.

Summary

Wind River Helix Chassis, with Cockpit and Drive, is designed to simplify your software development process and innovations for connected vehicles. Leveraging a time-tested RTOS and built-in security capabilities with standards-based and certified tools, templates, and methods, you can now develop IVI, telematics, and other apps faster and with guaranteed safety and security compliance. An open source architecture allows for greater flexibility, and you can build apps compatible with multiple hardware platforms. The result is the ability to innovate and implement value-added functionality for safer, more efficient connected cars and a better driving experience.

Intel® XDK FAQs - Cordova


How do I set app orientation?

You set the orientation under the Build Settings section of the Projects tab.

To control the orientation of an iPad you may need to create a simple plugin that contains a single plugin.xml file like the following:

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
    <string></string>
</config-file>
<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
    <array>
        <string>UIInterfaceOrientationPortrait</string>
    </array>
</config-file>

Then add the plugin as a local plugin using the plugin manager on the Projects tab.

HINT: To import the plugin.xml file you created above, you must select the folder that contains the plugin.xml file; you cannot select the plugin.xml file itself in the import dialog, because a typical plugin consists of many files, not a single plugin.xml. The plugin you created based on the instructions above requires only a single file; it is an atypical plugin.

Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. Import it as a third-party Cordova* plugin using the plugin manager with the following information:

  • cordova-plugin-screen-orientation
  • specify a version (e.g. 1.4.0) or leave blank for the "latest" version

Or, you can reference it directly from its GitHub repo (the URL above).

To use the screen orientation plugin referenced above, you must add some JavaScript code to your app that calls the additional JavaScript API provided by this plugin. Simply adding the plugin will not automatically fix your orientation; you must add code to your app that takes care of this. See the plugin's GitHub repo for details on how to use that API.

Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK's build system will work with it.

How do I send an email from my App?

You can use the Cordova* email plugin or the Web Intent plugin for PhoneGap* and Cordova* 3.x.

How do you create an offline application?

You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.

How do I work with alarms and timed notifications?

Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK's build system will work with it.

How do I get a reliable device ID?

You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8.

How do I implement In-App purchasing in my app?

There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called 'In App Purchase' which can be downloaded here.

How do I install custom fonts on devices?

Fonts can be considered an asset included with your app, private to the app and not shared with other apps on the device, just like images and CSS files. (It is possible to share some files between apps using, for example, the SD card space on an Android* device.) If you include the font files as assets in your application, there is no download time to consider: they are part of your app and already exist on the device after installation.

How do I access the device's file storage?

You can use HTML5 local storage; this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.

Why aren't AppMobi* push notification services working?

This seems to be an issue on AppMobi's end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.

How do I configure an app to run as a service when it is closed?

If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.

How do I dynamically play videos in my app?

  1. Download the JavaScript and CSS files from https://github.com/videojs and include them in your project.
  2. Add references to them into your index.html file.
  3. Add a panel 'main1' that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

     
    <div class="panel" id="main1" data-appbuilder-object="panel" style="">
      <video id="example_video_1" class="video-js vjs-default-skin" controls="controls"
             preload="auto" width="200" poster="camera.png" data-setup="{}">
        <source src="JAIL.mp4" type="video/mp4">
        <p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that
           <a href="http://videojs.com/html5-video-support/" target="_blank">supports HTML5 video</a></p>
      </video>
      <a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a>
    </div>
  4. When the user clicks on the video, the click event sets the 'src' attribute of the video element to what the user wants to watch.

     
    function runVid2() {
          document.getElementsByTagName("video")[0].setAttribute("src", "appdes.mp4");
          $.ui.loadContent("#main1", true, false, "pop");
    }
  5. The 'main1' panel opens waiting for the user to click the play button.

NOTE: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

How do I design my Cordova* built Android* app for tablets?

This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.

How do I resolve icon related issues with Cordova* CLI build system?

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS 6, you need to manually specify the icon sizes that iOS* 6 uses.

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" />
<icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

These sizes are not handled automatically by the build system, so you will have to include them in the additions file.

For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

Is there a plugin I can use in my App to share content on social media?

Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

Iframe does not load in my app. Is there an alternative?

Yes, you can use the inAppBrowser plugin instead.

Why are intel.xdk.istablet and intel.xdk.isphone not working?

Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion of the same topic.

How do I enable security in my app?

We recommend using the App Security API. App Security API is a collection of JavaScript API for Hybrid HTML5 application developers. It enables developers, even those who are not security experts, to take advantage of the security properties and capabilities supported by the platform. The API collection is available to developers in the form of a Cordova plugin (JavaScript API and middleware), supported on the following operating systems: Windows, Android & iOS.
For more details please visit: https://software.intel.com/en-us/app-security-api.

For enabling it, please select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. After adding the plugin, you can start using it simply by calling its API. For more details about how to get started with the App Security API plugin, please see the relevant sample app articles at: https://software.intel.com/en-us/xdk/article/my-private-photos-sample and https://software.intel.com/en-us/xdk/article/my-private-notes-sample.

Why does my build fail with Admob plugins? Is there an alternative?

Intel XDK does not support the library project that was newly introduced in the com.google.playservices@21.0.0 plugin. Admob plugins depend on "com.google.playservices", which adds the Google* Play services jar to the project. The "com.google.playservices@19.0.0" plugin is a simple jar file that works quite well, but "com.google.playservices@21.0.0" uses a new feature to include a whole library project. It works if built locally with the Cordova CLI but fails when using Intel XDK.

To remain compatible with Intel XDK, the Admob plugin's dependency should be changed to "com.google.playservices@19.0.0".

Why does the intel.xdk.camera plugin fail? Is there an alternative?

There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin instead and change the version to 0.3.3.

How do I resolve Geolocation issues with Cordova?

Give this app a try; it contains lots of useful comments and console log messages. However, use version 0.3.10 of the Cordova geo plugin instead of the Intel XDK geo plugin. The Intel XDK buttons in the sample app will not work in a built app because the Intel XDK geo plugin is not included, although they will partially work in the Emulator and Debug tab. If you test it on a real device without the Intel XDK geo plugin selected, you should be able to see what is and is not working on your device. There is a problem with the Intel XDK geo plugin: it cannot be used in the same build as the Cordova geo plugin. Do not use the Intel XDK geo plugin, as it will be discontinued.

Geo fine might not work because of the following reasons:

  1. Your device does not have a GPS chip
  2. It is taking a long time to get a GPS lock (if you are indoors)
  3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet to quickly get an initial reading. It will get a reading based on a variety of inputs; it is usually not as accurate as geo fine but is generally accurate enough to know what town you are in and your approximate location within that town. Geo coarse will also prime the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you might not be getting any geo data, as there is no guarantee you'll be able to get a geo fine reading at all, or in a reasonable period of time. Success with geo fine is highly dependent on a lot of parameters that are typically outside of your control.

Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

Yes, there is; you can find the one that best fits the bill in the Cordova* plugin registry.

To make this work you will need to do the following:

  • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
  • Include the plugin only on the Android* platform and use <video> on iOS*.
  • Create conditional code to do what is appropriate for the platform detected

You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

  1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
  2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->.

More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" /><preference name="StatusBarOverlaysWebView" value="false" /><preference name="StatusBarBackgroundColor" value="#000000" /><preference name="StatusBarStyle" value="lightcontent" /><!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces a plugin included with the "import plugin" dialog to be excluded from the platforms shown. You can include it only in the Android* platform by using conditional code and one or more appropriate plugins.

How do I display a webpage in my app without leaving my app?

The most effective way to do so is by using inAppBrowser.

Does Cordova* media have callbacks in the emulator?

While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.

Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?

This is due to the difficulty in keeping different components in sync and is compounded by the version numbering convention that the Cordova project uses to distinguish build tool versions (the CLI version) from platform versions (the Cordova target-specific framework version) and plugin versions.

The CLI version you specify in the Projects tab's Build Settings section is the "Cordova CLI" version that the build system uses to build your app. Each version of the Cordova CLI tools comes with a set of "pinned" Cordova platform framework versions, which are tied to the target platform.

NOTE: the specific Cordova platform framework versions shown below are subject to change without notice.

Our Cordova CLI 4.1.2 build system was "pinned" to: 

  • cordova-android@3.6.4 (Android Cordova platform version 3.6.4)
  • cordova-ios@3.7.0 (iOS Cordova platform version 3.7.0)
  • cordova-windows@3.7.0 (Cordova Windows platform version 3.7.0)

Our Cordova CLI 5.1.1 build system is "pinned" to:

  • cordova-android@4.1.1 (as of March 23, 2016)
  • cordova-ios@3.8.0
  • cordova-windows@4.0.0

Our Cordova CLI 5.4.1 build system is "pinned" to: 

  • cordova-android@5.0.0
  • cordova-ios@4.0.1
  • cordova-windows@4.3.1

Our Cordova CLI 6.2.0 build system is "pinned" to: 

  • cordova-android@5.1.1
  • cordova-ios@4.1.1
  • cordova-windows@4.3.2

Our CLI 6.2.0 build system is nearly identical to a standard Cordova CLI 6.2.0 installation. A standard 6.2.0 installation differs slightly from our build system because it specifies the cordova-ios@4.1.0 and cordova-windows@4.3.1 platform versions. There are no differences in the cordova-android platform versions.

Our CLI 5.4.1 build system really should be called "CLI 5.4.1+" because the platform versions it uses are closer to the "pinned" versions in the Cordova CLI 6.0.0 release than those "pinned" in the original CLI 5.4.1 release.

Our CLI 5.1.1 build system has been deprecated as of August 2, 2016, and will be retired with an upcoming fall 2016 release of the Intel XDK. It is highly recommended that you upgrade your apps to build with Cordova CLI 6.2.0 as soon as possible.

The Cordova platform framework version you get when you build an app does not equal the CLI version number in the Build Settings section of the Projects tab; it equals the Cordova platform framework version that is "pinned" to our build system's CLI version (see the list of pinned versions, above).

Technically, the target-specific Cordova platform frameworks can be updated [independently] for a given version of CLI tools. In some cases, our build system may use a Cordova platform version that is later than the version that was "pinned" to that version of the CLI when it was originally released by the Cordova project (that is, the Cordova platform versions originally specified by the Cordova CLI x.y.z links above).

You may see Cordova platform version differences in the Simulate tab, App Preview and your built app due to:

  • The Simulate tab uses one specific Cordova framework version. We try to make sure that the version of the Cordova platform it uses closely matches the current default Intel XDK version of Cordova CLI.

  • App Preview is released independently of the Intel XDK and, therefore, may use a different platform version than what you will see reported by the Simulate tab or your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is considered to be the default version for the Intel XDK at the time App Preview is released; but since the various tools are not always released in perfect sync, that is not always possible.

  • Your app is built with a "pinned" Cordova platform version, which is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section. There are always at least two different CLI versions available in the Intel XDK build system.

  • For those versions of Crosswalk that were built with the Intel XDK CLI 4.1.2 build system, the cordova-android framework version was determined by the Crosswalk project, not by the Intel XDK build system.

  • When building an Android-Crosswalk app with Intel XDK CLI 5.1.1 and later, the cordova-android framework version equals the "pinned" cordova-android platform version for that CLI version (see lists above).

Do these Cordova platform framework version numbers matter? Occasionally, yes, but normally, not that much. There are some issues that come up that are related to the Cordova platform version, but they tend to be rare. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose to use and the HTML5 webview runtime on your test devices. See When is an HTML5 Web App a WebView App? for more details about what a webview is and how the webview affects your app.

The "default version" of CLI that the Intel XDK build system uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and other Intel XDK components. In addition, we are not able to provide every CLI release that is made available by the Cordova project.

How do I add a third party plugin?

Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. Until you do so, the plugin is not included as part of your app; you will see it in the build log if it was successfully added to your build.

How do I make an AJAX call that works in my browser work in my app?

Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.
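
For quick reference, below is a minimal sketch of an AJAX request as it might appear in a Cordova app; the https://api.example.com endpoint is a placeholder assumption, and its domain must be whitelisted as described in the article above:

// Minimal XMLHttpRequest example for a Cordova app.
// NOTE: https://api.example.com is a hypothetical endpoint; replace it with your
// own server and make sure its domain is whitelisted per the article above.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://api.example.com/items");
xhr.onload = function () {
    if (xhr.status === 200) {
        var items = JSON.parse(xhr.responseText);   // parse the JSON payload
        console.log("Received " + items.length + " items");
    } else {
        console.error("Request failed with status " + xhr.status);
    }
};
xhr.onerror = function () {
    console.error("Network error -- is the target domain whitelisted?");
};
xhr.send();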

I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to the APIs you are using. Go to the Projects tab and ensure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.

How do I target my app for use only on an iPad or only on an iPhone?

There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but did not document it for the rest of the world). If you use the appropriate preference in the intelxdk.config.additions.xml file you should get what you need:

<preference name="target-device" value="tablet" />     <!-- Installs on iPad, not on iPhone -->
<preference name="target-device" value="handset" />    <!-- Installs on iPhone, iPad installs in a zoomed view and doesn't fill the entire screen -->
<preference name="target-device" value="universal" />  <!-- Installs on iPhone and iPad correctly -->

If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.

Why does my build fail when I try to use the Cordova* Capture Plugin?

The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.

How can I pinch and zoom in my Cordova* app?

For now, using the viewport meta tag is the only option for enabling pinch and zoom. However, its behavior is unpredictable across different webviews. Testing a few sample apps has led us to believe that this feature works better in Crosswalk for Android. You can test this by building the Hello Cordova sample app for Android and for Crosswalk for Android; pinch and zoom will work only on the latter, even though both builds include:

<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, minimum-scale=1, maximum-scale=2">.

Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:

http://blogs.intel.com/evangelists/2014/09/02/html5-web-app-webview-app/

https://software.intel.com/en-us/xdk/docs/why-use-crosswalk-for-android-builds

Another device-oriented approach is to enable pinch and zoom by turning on Android accessibility gestures.

How do I make my Android application use the fullscreen so that the status and navigation bars disappear?

The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, include this call: AndroidFullScreen.immersiveMode(null, null);

You can get this third-party plugin here: https://github.com/mesmotronic/cordova-fullscreen-plugin
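
As a minimal sketch (assuming the plugin linked above is installed, and that immersiveMode() accepts optional success and error callbacks in place of the two null arguments shown above), you might call it once Cordova signals deviceready:

// Hide the Android status and navigation bars once Cordova is ready.
// Assumes the cordova-fullscreen-plugin referenced above has been added to the project.
document.addEventListener("deviceready", function () {
    if (window.AndroidFullScreen) {   // the plugin object only exists on Android builds
        AndroidFullScreen.immersiveMode(
            function () { console.log("Immersive mode enabled"); },              // success callback (assumed)
            function (err) { console.error("Immersive mode failed: " + err); }   // error callback (assumed)
        );
    }
}, false);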

How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?

The Cordova CLI 4.1.2 build system can support this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX icon sizes directly. Use this workaround until these sizes are supported directly:

  • copy your XX and XXX icons into your source directory (usually named www)
  • add the following lines to your intelxdk.config.additions.xml file
  • see this Cordova doc page for some more details

Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these to your intelxdk.config.additions.xml file (the precise names of your PNG files may differ from what is shown here):

<!-- for adding xxhdpi and xxxhdpi icons on Android -->
<icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" />
<icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" />
<splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/>
<splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>

The precise names of your PNG files are not important, but the "density" designations are very important and, of course, the respective resolutions of your PNG files must be consistent with Android requirements. Those density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: The splash screen lines are included only for reference; you do not need to use this technique for splash screens.

You can continue to insert the other icons into your app using the Intel XDK Projects tab.

Which plugin is the best to use with my app?

We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.

Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.

See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.

What are the rules for my App ID?

The precise App ID naming rules vary as a function of the target platform (e.g., Android, iOS, Windows). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.

CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to limit acceptable App IDs to the set that is valid on every supported platform. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the requirements for CLI 5.1.1 are:

  • Each section of the App ID must start with a letter
  • Each section can only consist of letters, numbers, and the underscore character
  • Each section cannot be a Java keyword
  • The App ID must consist of at least two sections, separated by periods (".").

For example, an App ID such as com.mycompany.my_app2 satisfies these rules, while com.2company.myapp does not (a section starts with a digit) and com.new.myapp does not ("new" is a Java keyword).

iOS /usr/bin/codesign error: certificate issue for iOS app?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a signing identity error you probably have a bad or inconsistent provisioning file. The "no identity found" message in the build log excerpt, below, means that the provisioning profile does not match the distribution certificate that was uploaded with your application during the build phase.

Signing Identity:     "iPhone Distribution: XXXXXXXXXX LTD (Z2xxxxxx45)"
Provisioning Profile: "MyProvisioningFile"
                      (b5xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxe1)

    /usr/bin/codesign --force --sign 9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6 --resource-rules=.../MyApp/platforms/ios/build/device/MyApp.app/ResourceRules.plist --entitlements .../MyApp/platforms/ios/build/MyApp.build/Release-iphoneos/MyApp.build/MyApp.app.xcent .../MyApp/platforms/ios/build/device/MyApp.app
9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6: no identity found
Command /usr/bin/codesign failed with exit code 1

** BUILD FAILED **


The following build commands failed:
    CodeSign build/device/MyApp.app
(1 failure)

The excerpt shown above will appear near the very end of the detailed build log. The unique number patterns in this example have been replaced with "xxxx" strings for security reasons. Your actual build log will contain hexadecimal strings.

iOS Code Sign error: bundle ID does not match app ID?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a "Code Sign error" you may have a bad or inconsistent provisioning file. The "Code Sign" message in the build log excerpt, below, means that the bundle ID you specified in your Apple provisioning profile does not match the app ID you provided to the Intel XDK to upload with your application during the build phase.

Code Sign error: Provisioning profile does not match bundle identifier: The provisioning profile specified in your build settings (MyBuildSettings) has an AppID of my.app.id which does not match your bundle identifier my.bundleidentifier.
CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 8.0'

** BUILD FAILED **

The following build commands failed:
    Check dependencies
(1 failure)
Error code 65 for command: xcodebuild with args: -xcconfig,...

The message above translates into "the bundle ID you entered in the project settings of the XDK does not match the bundle ID (app ID) that you created on Apple's developer portal and then used to create a provisioning profile."

iOS build error?

If your iOS build is failing with error code 65 and Xcodebuild in the error log, there are most likely issues with your certificate and provisioning profile. Sometimes Xcode gives a specific error, such as "Provisioning profile does not match bundle identifier," and other times something like "Code Sign error: No codesigning identities found: No code signing identities." The root of the issue is usually an incorrect certificate (P12 file) and/or provisioning profile, or a mismatch between the P12 file and the provisioning profile. Make sure your P12 file and provisioning profile are correct: the provisioning profile has to be generated using the certificate you used to create the P12 file. Also, the app ID you provide in the Intel XDK Build Settings has to match the app ID created on the Apple Developer portal, and that same app ID has to be used when creating the provisioning profile.

Please follow these steps to generate the P12 file.

  1. Create a .csr file from the Intel XDK (do not close the dialog box used to upload the .cer file).
  2. Click the Apple Developer Portal link in the dialog box (do not close the dialog box in the Intel XDK).
  3. Upload the .csr file to the Apple Developer Portal.
  4. Generate a certificate on the Apple Developer Portal.
  5. Download the .cer file from the Developer Portal.
  6. Return to the Intel XDK dialog box where you left off in step 1 and press Next; select the .cer file you downloaded in step 5 and generate the .p12 file.
  7. Create an App ID on the Apple Developer Portal.
  8. Generate a provisioning profile on the Apple Developer Portal using the certificate you generated in step 4 and the App ID created in step 7.
  9. Provide the same App ID (step 7), .p12 file (step 6), and provisioning profile (step 8) in the Intel XDK Build Settings.

A few things to check before you build:

  1. Make sure your certificate has not expired.
  2. Make sure the App ID you created on the Apple Developer portal matches the App ID you provided in the Intel XDK Build Settings.
  3. Make sure you are using the provisioning profile that is associated with the certificate you are using to build the app.
  4. Apple allows only three active certificates; if you need to create a new one, revoke one of the older certificates and then create the new one.

This App Certificate Management video shows how to create a P12 file and a provisioning profile; the P12 creation part starts at 16:45. Please follow the process for creating a P12 file and generating a provisioning profile as shown in the video, or follow this Certificate Management document.

What are plugin variables used for? Why do I need to supply plugin variables?

Some plugins require details that are specific to your app or to your developer account, for example, to authorize your app as one that belongs to you, the developer, so that services can be properly routed to the correct service provider. The precise reasons depend on the specific plugin and its function.

What happened to the Intel XDK "legacy" build options?

On December 14, 2015, the Intel XDK legacy build options were retired and are no longer available to build apps. The legacy build option was based on three-year-old technology that predates the current Cordova project. All Intel XDK development efforts for the past two years have been directed at building standard Apache Cordova apps.

Many of the intel.xdk legacy APIs that were supported by the legacy build options have been migrated to standard Apache Cordova plugins and published as open source plugins. The API details for these plugins are available in the README.md files in the respective 01.org GitHub repos. Additional details regarding the new Cordova implementations of the intel.xdk legacy APIs are available in the doc page titled Intel XDK Legacy APIs.

Standard Cordova builds do not require the use of the "intelxdk.js" and "xhr.js" phantom scripts. Only the "cordova.js" phantom script is required to successfully build Cordova apps. If you have been including "intelxdk.js" and "xhr.js" in your Cordova builds, they have been quietly ignored. You should remove references to these files from your "index.html" file; leaving them in does no harm, but it results in a warning that the respective script file cannot be found at runtime.

The Emulate tab will continue to support some legacy intel.xdk APIs that are NOT supported in the Cordova builds (only those intel.xdk APIs that are supported by the open source plugins are available to a Cordova built app, and only if you have included the respective intel.xdk plugins). This Emulate tab discrepancy will be addressed in a future release of the Intel XDK.

More information can be found in this forum post > https://software.intel.com/en-us/forums/intel-xdk/topic/601436.

Which build files do I submit to the Windows Store and which do I use for testing my app on a device?

There are two things you can do with the build files generated by the Intel XDK Windows build options: side-load your app onto a real device (for testing) or publish your app in the Windows Store (for distribution). Microsoft has changed the files you use for these purposes with each release of a new platform. As of December, 2015, the packages you might see in a build, and their uses, are:

  • appx works best for side-loading, and can also be used to publish your app.
  • appxupload is preferred for publishing your app, it will not work for side-loading.
  • appxbundle will work for both publishing and side-loading, but is not preferred.
  • xap is for legacy Windows Phone; works for both publishing and side-loading.

In essence: XAP (WP7) was superseded by APPXBUNDLE (Win8 and WP8.0), which was superseded by APPX (Win8/WP8.1/UAP), which has been supplemented with APPXUPLOAD. APPX and APPXUPLOAD are the preferred formats. For more information regarding these file formats, see Upload app packages on the Microsoft developer site.

Side-loading a Windows Phone app onto a real device, over USB, requires a Windows 8+ development system (see Side-Loading Windows* Phone Apps for complete instructions). If you do not have a physical Windows development machine, you can use a virtual Windows machine or use the Windows Store Beta testing and targeted distribution technique to get your app onto real test devices.

Side-loading a Windows tablet app onto a Windows 8 or Windows 10 laptop or tablet is simpler. Extract the contents of the ZIP file that you downloaded from the Intel XDK build system, open the "*_Test" folder inside the extracted folder, and run the PowerShell script (ps1 file) contained within that folder on the test machine (the machine that will run your app). The ps1 script file may need to request a "developer certificate" from Microsoft before it will install your test app onto your Windows test system, so your test machine may require a network connection to successfully side-load your Windows app.

The side-loading process may not overwrite an existing side-loaded app with the same ID. To be sure your test app side-loads properly, it is best to uninstall the old version of your app before side-loading a new version onto your test system.

How do I implement local storage or SQL in my app?

See this summary of local storage options for Cordova apps written by Josh Morony, A Summary of Local Storage Options for PhoneGap Applications.
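
As one simple option covered in that article, here is a minimal sketch that uses the standard HTML5 localStorage API (available in Cordova webviews); the key name and data shown are illustrative assumptions:

// Persist a small piece of app state across launches using HTML5 localStorage.
// localStorage is best suited to small key/value data; use a database plugin for larger data sets.
var settings = { theme: "dark", volume: 0.8 };                    // illustrative data
localStorage.setItem("appSettings", JSON.stringify(settings));    // store as a JSON string

var saved = localStorage.getItem("appSettings");                  // returns null if nothing was stored
if (saved !== null) {
    var restored = JSON.parse(saved);
    console.log("Restored theme: " + restored.theme);
}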

How do I prevent my app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.

Why does my PHP script not run in my Intel XDK Cordova app?

Your XDK app is not a page on a web server; you cannot use dynamic web server techniques because there is no web server associated with your app to which you can pass off PHP scripts and similar actions. When you build an Intel XDK app you are building a standalone Cordova client web app, not a dynamic server web app. You need to create a RESTful API on your server that you can then call from your client (the Intel XDK Cordova app) and pass and return data between the client and server through that RESTful API (usually in the form of a JSON payload).

Please see this StackOverflow post and this article by Ray Camden, a longtime developer of the Cordova development environment and Cordova apps, for some useful background.
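
As a rough sketch of the client side of this pattern (the https://myserver.example.com/api/users endpoint and the shape of its JSON response are assumptions for illustration; your own server must expose the REST API):

// POST JSON to a RESTful endpoint and handle the JSON response.
// https://myserver.example.com/api/users is a hypothetical endpoint for illustration only.
var xhr = new XMLHttpRequest();
xhr.open("POST", "https://myserver.example.com/api/users");
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onload = function () {
    if (xhr.status >= 200 && xhr.status < 300) {
        var user = JSON.parse(xhr.responseText);     // the server replies with a JSON payload
        console.log("Created user with id " + user.id);
    } else {
        console.error("Server returned status " + xhr.status);
    }
};
xhr.send(JSON.stringify({ name: "Ada", email: "ada@example.com" }));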

Following is a lightly edited recommendation from an Intel XDK user:

I came from php+mysql web development. My first attempt at an Intel XDK Cordova app was to create a set of php files to query the database and give me the JSON. It was a simple job, but totally insecure.

Then I found dreamfactory.com, open source software that automatically creates REST API functions for several databases, both SQL and NoSQL. I use it a lot. You can start with a free account to develop and test and then install it on your server. Another possibility is phprestsql.sourceforge.net; this is a library that does what I tried to develop by myself. I did not try it, but perhaps it will help you.

And finally, I'm using PouchDB and CouchDB ("a database for the web"). It is not SQL, but it is very useful and easy if you need to develop a mobile app with only a few tables. It will also work with a lot of tables, but for a simple database it is an easy place to start.

I strongly recommend that you start to learn these new ways to interact with databases. You will need to invest some time, but it is the way to go. Do not try to use MySQL and PHP the old-fashioned way; you can get it to work, but at some point you may get stuck.

Why doesn’t my Cocos2D game work on iOS?

This is an issue with Cocos2D and is not a reflection of our build system. As an interim solution, we have modified the CCBoot.js file for compatibility with iOS and App Preview. You can view an example of this modification in this CCBoot.js file from the Cocos2d-js 3.1 Scene GUI sample. The update has been applied to all Cocos2D templates and samples that ship with the Intel XDK.

The fix involves two line changes (for the generic Cocos2D fix) and one additional line (for it to work in App Preview on iOS devices):

Generic Cocos2D fix:

1. Inside the loadTxt function, xhr.onload should be defined as

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.responseText != "" ? cb(null, xhr.responseText) : cb(errInfo);
};

instead of

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.status == 200 ? cb(null, xhr.responseText) : cb(errInfo);
};

2. The condition inside the _loadTxtSync function should be changed to

if (!xhr.readyState == 4 || (xhr.status != 200 || xhr.responseText != "")) {

instead of 

if (!xhr.readyState == 4 || xhr.status != 200) {

 

App Preview fix:

Add this line inside of loadTxtSync after _xhr.open:

xhr.setRequestHeader("iap_isSyncXHR", "true");

How do I change the alias of my Intel XDK Android keystore certificate?

You cannot change the alias name of your Android keystore within the Intel XDK, but you can download the existing keystore, change the alias on that keystore and upload a new copy of the same keystore with a new alias.

Use the following procedure:

  • Download the converted legacy keystore from the Intel XDK (the one with the bad alias).

  • Locate the keytool app on your system (this assumes that you have a Java runtime installed on your system). On Windows, this is likely to be located at %ProgramFiles%\Java\jre8\bin (you might have to adjust the value of jre8 in the path to match the version of Java installed on your system). On Mac and Linux systems it is probably located in your path (in /usr/bin).

  • Change the alias of the keystore using this command (see the keytool -changealias -help command for additional details):

keytool -changealias -alias "existing-alias" -destalias "new-alias" -keypass keypass -keystore /path/to/keystore -storepass storepass
  • Import this new keystore into the Intel XDK using the "Import Existing Keystore" option in the "Developer Certificates" section of the "person icon" located in the upper right corner of the Intel XDK.

What causes "The connection to the server was unsuccessful. (file:///android_asset/www/index.html)" error?

See this forum thread for some help with this issue. This error is most likely due to errors retrieving assets over the network or long delays associated with retrieving those assets.

How do I manually sign my Android or Crosswalk APK file with the Intel XDK?

To sign an app manually, you must build your app by "deselecting" the "Signed" box in the Build Settings section of the Android tab on the Projects tab:

Follow these Android developer instructions to manually sign your app. The instructions assume you have Java installed on your system (for the jarsigner and keytool utilities). You may have to locate and install the zipalign tool separately (it is not part of Java) or download and install Android Studio.

These two sections of the Android developer Signing Your Applications article are also worth reading:

Why should I avoid using the additions.xml file? Why should I use the Plugin Management Tool in the Intel XDK?

The Intel XDK (version 2496 and up) now includes a Plugin Management Tool that simplifies adding and managing Cordova plugins. We urge all users to manage the plugins in their existing or upgraded projects using this tool. If you were using the intelxdk.config.additions.xml file to manage plugins in the past, you should remove those entries and use the Plugin Management Tool to add all plugins instead.

Why you should be using the Plugin Management Tool:

  • It can now manage plugins from all sources. Popular plugins have been added to the Featured plugins list. Third-party plugins can be added from the Cordova Plugin Registry, a Git repo, or your file system.

  • Consistency: Unlike previous versions of the Intel XDK, plugins you add are now stored as a part of your project on your development system after they are retrieved by the Intel XDK and copied to your plugins directory. These plugin files are delivered, along with your source code files, to the Intel XDK cloud-based build server. This change ensures greater consistency between builds, because you always build with the plugin version that was retrieved by the Intel XDK into your project. It also provides better documentation of the components that make up your Cordova app, because the plugins are now part of your project directory. This is also more consistent with the way a standard Cordova CLI project works.

  • Convenience: In the past, the only way to add a third-party plugin that required parameters was to include it in the intelxdk.config.additions.xml file; the plugin would then be added to your project by the build system. This is no longer recommended. The new Plugin Management Tool automatically parses the plugin.xml file and prompts you, from within the XDK, for any plugin variables it requires.

    When a plugin is added via the Plugin Management Tool, a plugin entry is added to the project file and the plugin source is downloaded to the plugins directory, making for a more stable project. After a build, the build system automatically generates config XML files in your project directory that include a complete summary of plugins and variable values.

  • Correctness of the Debug Module: The Intel XDK now provides remote on-device debugging for projects with third-party plugins by building a custom debug module from your project's plugins directory. The debug module does not read from or write to the intelxdk.config.additions.xml file; the only time that file is used is during a build. This means the debug module is not aware of plugins added via the intelxdk.config.additions.xml file, so adding plugins via that file should be avoided. Here is a useful article for understanding Intel XDK Build Files.

  • Editing Plugin Sources: There are a few cases where you may want to modify plugin code to fix a bug in a plugin, or add console.log messages to a plugin's sources to help debug your application's interaction with the plugin. To accomplish these goals you can edit the plugin sources in the plugins directory. Your modifications will be uploaded along with your app sources when you build your app using the Intel XDK build server and when a custom debug module is created by the Debug tab.

How do I fix this "unknown error: cannot find plugin.xml" when I try to remove or change a plugin?

Removing or changing a plugin in your project sometimes generates an "unknown error: cannot find plugin.xml" message.

This is not a common problem, but if it does happen it means a file in your plugin directory is probably corrupt (usually one of the JSON files found inside the plugins folder at the root of your project folder).

The simplest fix is to:

  • make a list of ALL of your plugins (especially the plugin ID and version number; see the image below)
  • exit the Intel XDK
  • delete the entire plugins directory inside your project
  • restart the Intel XDK

The XDK should detect that all of your plugins are missing and attempt to reinstall them. If it does not automatically re-install all or some of your plugins, then reinstall them manually from the list you saved in step one (see the image below for the important data that documents your plugins).

NOTE: If you re-install your plugins manually, you can use the third-party plugin add feature of the plugin management system to specify the plugin ID and retrieve your plugins from the Cordova plugin registry. If you leave the version number blank, the latest version of the plugin available in the registry will be retrieved by the Intel XDK.

Why do I get a "build failed: the plugin contains gradle scripts" error message?

You will see this error message in your Android build log summary whenever you include a Cordova plugin that includes a gradle script in your project. Gradle scripts add extra Android build instructions that are needed by the plugin.

The current Intel XDK build system does not allow the use of plugins that contain gradle scripts because they present a security risk to the build system and your Intel XDK account. An unscrupulous user could use a gradle-enabled plugin to do harmful things with the build server. We are working on a build system that will ensure the necessary level of security to allow for gradle scripts in plugins, but until that time, we cannot support plugins that include gradle scripts.

The error message in your build summary log will look like the following:

In some cases the plugin gradle script can be removed, but only if you manually modify the plugin to implement whatever the gradle script was doing automatically. In some cases this can be done easily (for example, the gradle script may be building a JAR library file for the plugin), but sometimes the plugin is not easily modified to remove the need for the gradle script. Exactly what needs to be done to the plugin depends on the plugin and the gradle script.

You can find out more about Cordova plugins and gradle scripts by reading this section of the Cordova documentation. In essence, if a Cordova plugin includes a build-extras.gradle file in the plugin's root folder, or if it contains one or more lines similar to the following, inside the plugin.xml file:

<framework src="some.gradle" custom="true" type="gradleReference" />

it means that the plugin contains gradle scripts and will be rejected by the Intel XDK build system.

How does one remove gradle dependencies for plugins that use Google Play Services (esp. push plugins)?

Our Android (and Crosswalk) CLI 5.1.1 and CLI 5.4.1 build systems include a fix for an issue in the standard Cordova build system that allows some Cordova plugins to be used with the Intel XDK build system without their included gradle script!

This fix only works with those Cordova plugins that include a gradle script for one and only one purpose: to set the value of applicationID in the Android build project files (such a gradle script copies the value of the App ID from your project's Build Settings, on the Projects tab, to this special project build variable).

Using phonegap-plugin-push as an example, this Cordova plugin contains a gradle script named push.gradle, which has been added to the plugin and looks like this:

import java.util.regex.Pattern

def doExtractStringFromManifest(name) {
    def manifestFile = file(android.sourceSets.main.manifest.srcFile)
    def pattern = Pattern.compile(name + "=\"(.*?)\"")
    def matcher = pattern.matcher(manifestFile.getText())
    matcher.find()
    return matcher.group(1)
}

android {
    sourceSets {
        main {
            manifest.srcFile 'AndroidManifest.xml'
        }
    }

    defaultConfig {
        applicationId = doExtractStringFromManifest("package")
    }
}

All this gradle script is doing is inserting your app's "package ID" (the "App ID" in your app's Build Settings) into a variable called applicationID for use by the build system. It is needed, in this example, by the Google Play Services library to ensure that calls through the Google Play Services API can be matched to your app. Without the proper App ID, the Google Play Services library cannot distinguish between multiple apps on an end user's device that use the Google Play Services library, for example.

The phonegap-plugin-push is being used as an example for this article. Other Cordova plugins exist that can also be used by applying the same technique (e.g., the pushwoosh-phonegap-plugin will also work using this technique). It is important that you first determine that only one gradle script is being used by the plugin of interest and that this one gradle script is used for only one purpose: to set the applicationID variable.

How does this help you and what do you do?

To use a plugin with the Intel XDK build system that includes a single gradle script designed to set the applicationID variable:

  • Download a ZIP of the plugin version you want to use (e.g. version 1.6.3) from that plugin's git repo.

    IMPORTANT: Be sure to download a released version of the plugin; the "head" of the git repo may be "under construction." Some plugin authors make it easy to identify a specific version, some do not, so be aware and careful when choosing what you clone from a git repo!

  • Unzip that plugin onto your local hard drive.

  • Remove the <framework> line that references the gradle script from the plugin.xml file.

  • Add the modified plugin into your project as a "local" plugin (see the image below).

In this example, you will be prompted to define a variable that the plugin also needs. If you know that variable's name (it's called SENDER_ID for this plugin), you can add it using the "+" icon in the image above, and avoid the prompt. If the plugin add was successful, you'll find something like this in the Projects tab:

If you are curious, you can inspect the AndroidManifest.xml file that is included inside your built APK file (you'll have to use a tool like apktool to extract and reconstruct it from your APK file). You should see something like the following highlighted line, which should match your App ID; in this example, the App ID was io.cordova.hellocordova:

If you see the following App ID, it means something went wrong. This is the default App ID for the Google Play Services library that will cause collisions on end-user devices when multiple apps that are using Google Play Services use this same default App ID:


Intel® Parallel Computing Center at the Computational Fluid Dynamics Department (ONERA)


Principal Investigators:

Alain Refloch joined ONERA in 1990 and was in charge of user support for scientific computation. He was at the origin of the 'Software Engineering and HPC' unit, created in 2000. He became project leader of the CEDRE software in 2003 (see reference paper) and joined the Computational Fluid Dynamics and AeroAcoustics Department. A. Refloch has been a member of the scientific council of ORAP since 2009.

He was co-organizer of the International Workshop on High Performance Computing – Computational Fluid Dynamics (HPC-CFD) in Energy/Transport Domains (held at the 16th IEEE High Performance Computing and Communications conference in Paris and at ISC'15 in Frankfurt). Today he is Special Advisor for HPC.

Ivan Mary obtained his PhD in 1999 at Paris-Orsay University in the field of numerical methods for CFD. He joined ONERA in 2000 with the mission to develop methods and software enabling efficient Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) of turbulent flow around complex configurations. He has supervised around 10 PhD students over the last 15 years in the fields of numerical methods, fluid dynamics, and turbulence modelling. Since 2011, he has focused a large part of his work on HPC (re-engineering, coarse-grain OpenMP parallelization, and vectorization), because this is a crucial point for unsteady computations of turbulent flows. Since 2015, he has been in charge of the FAST demonstrator, which must provide the HPC basis of the next-generation elsA software.

Description:

The ONERA CFD department has developed and supported fluid dynamics software for decades, both for its own research and for industrial partners in the aeronautical domain. Today the elsA software, developed at ONERA since 1997, is one of the major CFD tools used by Airbus, Eurocopter, and Safran. In their design departments, it is used extensively to optimize airplane performance (noise or energy-consumption reduction, safety improvement). Due to environmental constraints, noise reduction in the vicinity of airports has become a major challenge for aircraft manufacturers. The noise radiated during the landing phase is due to turbulent vortices generated by the landing gear and the flaps on the wings, which act like powerful whistles. Numerical simulation of the generated noise requires handling the complex, detailed geometry of the landing gear or flaps and solving billions of unknowns at each time step to describe the time evolution of the turbulent vortices over millions of time steps, in order to compute just a few seconds of physical time.

HPC capability, the (re)meshing of complex geometries, and multiphysics coupling (between noise generation and propagation) are therefore crucial for the software to obtain a solution in a reasonable time. For these reasons, a demonstrator named FAST (Flexible Aerodynamic Solver Technology) has been under development for the past one to two years to prepare a major evolution of elsA in the coming years. This demonstrator aims to provide a software architecture and numerical techniques that will allow better flexibility, extensibility, and efficiency, in order to perform simulations that are out of reach of current CFD tools. Building on previous expertise, the services required by CFD simulations (pre/post-processing, boundary conditions, solvers, coupling, etc.) are provided by different Python modules in FAST, while the CFD General Notation System (CGNS) standard is adopted as the data model, and also for the implementation of that data model, in order to facilitate interoperability between modules. To improve flexibility in the meshing of complex geometrical details, an automatic Cartesian grid generator, immersed boundary conditions, and the chimera technique will be employed during the present Intel® PCC project to compute the noise generated by the LAGOON landing gear configuration. Thanks to code modernization (memory access, vectorization, etc.), we aim to reduce the CPU cost of this kind of computation by at least one order of magnitude on current Intel® Xeon® processors and the future Intel® Xeon Phi™ processor family.

"Complex Structures of Dynamic Stall by LES" by Ivan Mary (ONERA)

Related websites:

http://www.onera.fr
http://elsa.onera.fr
http://elsa.onera.fr/Cassiopee
http://www.hpctoday.com/state-of-the-art/processor-evolution-what-to-prepare-application-codes-for

Using Libraries in Your IoT Project


Functions and Libraries

Functions are blocks of code that focus on specific tasks. There are two main types of functions: those packaged in a library and those you define in your code. Programming languages such as C++, Python*, Java*, and Node.js* provide built-in functions that you can call at any time. Functions go by different names in different programming languages, such as method, subroutine, and procedure.

Libraries are collections of functions. For example, sqrt(x) is a built-in mathematical C++ function from the math.h library; it computes the square root of a given number. That library includes many other functions, such as logarithms (log10) and exponentials (exp). For Internet of Things (IoT)–related projects, two main libraries are particularly useful: Libmraa* (MRAA) and Useful Packages & Modules (UPM). When working with hardware, you need tools to communicate with the various parts of your board and its connected sensors. These libraries come with definitions for the various sensors and are cross-compatible with multiple boards.

Libmraa*

If you want to use Libmraa*, you must first install it on your Intel® Galileo or Intel® Edison board. The library enables you to interact with the board through the functions it provides. Libmraa has defined configurations for each board and its components. It provides an application programming interface (API) for interfacing with low-level peripherals, such as general-purpose input/output (GPIO), pulse-width modulation (PWM), Bluetooth* low energy, inter-integrated circuit (I2C), serial peripheral interface (SPI), and Universal Asynchronous Receiver/Transmitter (UART). The physical pins of chips and sensors map to Libmraa pin numbers. You don't need to know the details of how communication between the various components happens: the library takes care of which board and breakout pins are connected. Figure 1 illustrates the pins located on the Edison board, and Table 1 shows a small subset of those pins. You can see that the I2C pin is physical pin J17-pin 8, which is mapped in MRAA to pin 7; similarly, a PWM pin is physical pin J18-pin 7, which MRAA maps to pin 20. This way, when you program an MRAA pin, it communicates with its mapped physical pin on the board. You can see the full breakout board and pins in the paper Intel® Edison Breakout Board.

Figure 1. The Intel® Edison board, with a subset of its pins


Table 1. Subset of pins on the Intel® Edison board

Pin            Signal              Description
J17 - pin 1    GP182_PWM2          GPIO capable of PWM output
J17 - pin 5    GP135 / UART2_TX    GPIO, UART2 transmit output
J17 - pin 8    GP20 / I2C1_SDA     GPIO, I2C1 data open collector
J18 - pin 7    GP12_PWM0           GPIO capable of PWM output
J18 - pin 8    GP183_PWM3          GPIO capable of PWM output
J18 - pin 12   GP129 / UART1_RTS   GPIO, UART1 ready to send output

Within Libmraa are multiple API classes, each with numerous functions that you can use for your IoT projects. Here is a small list of those functions:

GPIO class. GPIO interface to Libmraa; functions include:

  • mraa_gpio_init_raw function
  • mraa_gpio_context
  • mraa_gpio_dir

I2C class. I2C to Libmraa; functions include:

  • mraa_i2c_init
  • mraa_i2c_read

AIO class. Analog input/output (AIO) interface to Libmraa; functions include:

  • mraa_aio_init
  • mraa_aio_set_bit

PWM class. PWM interface to Libmraa; functions include:

  • mraa_pwm_init_raw
  • mraa_pwm_period_ms

SPI class. SPI to Libmraa; functions include:

  • mraa_spi_write
  • mraa_spi_transfer_buf
  • mraa_spi_bit_per_word

UART class. UART to Libmraa; functions include:

  • mraa_uart_set_baudrate
  • mraa_uart_set_flowcontrol
  • mraa_uart_set_timeout

COMMON class. Defines the basic shared values for Libmraa; functions include:

  • mraa_adc_raw_bits
  • mraa_get_i2c_bus_count
  • mraa_get_platform_type

For a full list and description of functions that each class supports, see mraa documentation.

To use all the functionality that MRAA provides, you must include mraa.h in your code. Figure 2 shows a C example for connecting an LED to D5 on your board and using GPIO functions. The outcome is a blinking LED; the sleep function controls how often the LED will be in the On and Off states.

Figure 2. C example using MRAA and general-purpose input/output

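The figure's C listing is not reproduced here. As a rough equivalent, here is a minimal sketch using the MRAA Node.js bindings (the pin number, timing, and use of JavaScript rather than C are assumptions for illustration; check your board's pin mapping before running it):

// Blink an LED using the MRAA Node.js bindings.
var mraa = require('mraa');        // load the MRAA library

var led = new mraa.Gpio(5);        // MRAA pin number (an assumption; adjust for your board's mapping)
led.dir(mraa.DIR_OUT);             // configure the pin as an output

var state = 0;
setInterval(function () {
    state = state ? 0 : 1;         // toggle between on and off
    led.write(state);              // drive the pin high or low
}, 1000);                          // blink once per second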

Useful Packages & Modules

UPM provides software drivers for a variety of commonly used sensors and actuators. Drivers enable the hardware to communicate with the operating system. UPM is a level above MRAA and assists with object control of elements such as RGB LCDs and temperature sensors. The list of supported sensors is vast and includes accelerometers, buttons, color sensors, gas sensors, global positioning system (GPS) receivers, radio-frequency identification, and servos. See the full list.

Figure 3 illustrates how you can use the UPM library to read the input of a button. First, you need to include the grove.hpp header, which allows you to use the upm::GroveButton class to create an object for the button and later use the name() and value() functions to obtain the desired values from the sensor. See additional UPM examples.

Figure 3. Useful Packages & Modules example, with definitions for functions and libraries

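Because the figure's listing is not reproduced here, the following is a minimal sketch of the same idea using the UPM Node.js bindings (the jsupm_grove module name and digital pin 2 are assumptions; the C++ version uses grove.hpp and upm::GroveButton as described above):

// Read a Grove button with the UPM Node.js bindings and print its name and value.
var groveSensor = require('jsupm_grove');     // UPM Grove module (assumed to be installed with the UPM libraries)

var button = new groveSensor.GroveButton(2);  // button wired to digital pin D2 (an assumption)

setInterval(function () {
    // name() returns the sensor name; value() returns 1 while the button is pressed, 0 otherwise
    console.log(button.name() + " value is " + button.value());
}, 1000);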

Integrated development environments (IDEs) such as Eclipse* and Intel® XDK IoT Edition come with MRAA and UPM as integrated libraries. The supported IDEs are available for download from Intel® IoT Developer Kit Integrated Development Environments. When using Eclipse, you can choose to create a new IoT project that will include a couple of tools not available in other types of projects (Figure 4).

Figure 4. Creating a new Internet of Things project in Eclipse*


The IoT Sensor Support tool, shown in Figure 5, contains the list of sensors and actuators, with a description and picture of the hardware. Simply select all the hardware you will include in your project, and the tool automatically adds the related libraries to your code. For example, if you will be using the MMA7455 accelerometer, the tool will append #include <mma7455.h> to your current .cpp code.

Figure 5. Internet of Things sensors and libraries included in Eclipse*


Libmraa and UPM work hand-in-hand to provide the tools you need to interact easily with the elements in your project. They remove the hassle of knowing the pins and requirements of each piece of hardware, packaging all the specifics into friendly, easy-to-use libraries. From prototyping an automatic pet feeder to creating a wearable that uses GPS, an accelerometer, and an RGB LCD to track your movements, you can develop your projects with the help of MRAA and UPM.

Both libraries are open source projects, with dedicated GitHub* repositories (see the upm and mraa repositories). Developers are welcome to assist with enhancing and writing new APIs and functions, and reviewing documentation. Contributors are required to follow a coding and documentation style. Thanks to the open source community’s efforts, new functions and libraries are added frequently for various coding languages.

Summary

Libraries and their functions help with cleaner, more efficient code that you can reuse. For built-in functions, you don’t need to worry about how the computation is performed: As long as you implement the function with the required input, it will generate the desired output. For your IoT projects, Libmraa and UPM are essential libraries.

For More Information

Getting Started with Galileo and Arduino


This article describes how to get started with the Intel® Galileo board and the Arduino* IDE.

If you prefer developing with Java*, JavaScript, or C++, see "Programming options," below.

Hardware requirements

  • An Intel® Galileo Gen 1 or Gen 2 board.
  • One power supply for the Galileo board, either 5V DC (for Gen 1) or 7–15V DC (for Gen 2). Your power supply should be included in the packaging along with your board.
  • One micro-USB cable (Micro B to Type A).
  • A Windows*, Mac* OS X*, or Linux* computer.

Set up the board

  1. If you have a micro-SD card inserted into your board, remove it.
  2. Plug in the power supply to your board. Always plug in the power supply before the USB cable.
  3. Plug in the micro-USB cable to your board's USB client port. Plug the other end in to your computer.

On the Intel® Galileo Gen 1 board, your setup should look like this:


On the Intel® Galileo Gen 2 board, your setup should look like this:


Next Steps

Your next steps depend on the OS you are using: 64-bit Windows*, 32-bit Windows*, OS X*, or Linux*.

For Mac* OS X* or Linux*

You can skip to the next section; there’s no further setup required.

For 64-bit Windows*

To install the drivers required by the Arduino* IDE, run the Galileo Windows 64-bit Arduino installer, which is here: https://software.intel.com/en-us/iot/hardware/galileo/downloads, listed under Installer.

The installer includes a step to create a micro SD card. This step is optional; create a micro SD card only if you want to set up Wi-Fi driver support and sketch persistence on the board.

When finished, continue to the next section.

For 32-bit Windows*

To install the drivers required by the Arduino IDE, follow the instructions to install and run the Firmware Updater tool, listed here: https://software.intel.com/en-us/installing-drivers-and-updating-firmware-for-arduino-windows.

When finished, continue to the next section.

Installing the Arduino* IDE

Install the Arduino IDE for your OS from the appropriate section on this page: https://software.intel.com/en-us/get-started-arduino-install.

When finished, continue to the next section.

Blinking the board’s LED with the Arduino* IDE

To blink the LED on your Galileo board, run the Blink example sketch in the Arduino IDE, as described in the Running Arduino section on this page: https://software.intel.com/en-us/get-started-arduino-blink

When the LED blinks, you’re done, and have successfully set up your Galileo board.

Programming options

You can program your Galileo board using C++ or Java* with the Intel® System Studio IoT Edition, or using JavaScript with the Intel® XDK IoT Edition.

The Intel® System Studio IoT Edition is a plugin for Eclipse* that supports Java and C/C++ projects. It allows you to connect to, update, and program IoT projects on the Galileo board, as well as add sensors to a project, take advantage of the example code provided with the Intel System Studio IoT Edition, and more.

The Intel System Studio IoT Edition provides two libraries, specially designed for the Intel IoT Developer Kit:

  • MRAA is a low-level library that offers a translation from the input/output interfaces to the pins available on your Galileo board.
  • UPM is a sensor library that utilizes MRAA and provides multiple-language support. UPM allows you to conveniently use or create sensor representations for your projects.

The Intel® XDK IoT Edition supports embedded JavaScript projects with Node.js, and provides a comprehensive, cross-platform development environment for developing and building hybrid HTML5 mobile and web apps.

Make a bootable micro SD card

A bootable micro SD card is required for using the Galileo board with the Intel® System Studio IoT Edition or the Intel® XDK IoT Edition.

Depending on your OS, follow the instructions for making a bootable micro SD card.


Intel XDK Supported Systems


Intel® XDK - Release - 2016 August 2, v3491

Intel XDK is a cross platform development environment.
Develop applications on Microsoft Windows*, Apple OS X* and Ubuntu Linux*.
Build apps to target Android*, iOS*, Windows or Web Apps.
Access device capabilities with Apache Cordova* APIs.
Choose Crosswalk Project web runtime on Android for an updated webview.

Supported Versions

Development systems
OS                           Version
Microsoft Windows Desktop    7, 8, and 10 [3]
Apple OS X                   10.7 (Lion) or newer [1]
Ubuntu Linux                 14.04 (Trusty Tahr) or newer [2]

 

App Targets
OS         Version
Android    API Level 14 (Ice Cream Sandwich 4.0.1) or newer [4]
iOS        6.0 to 9.2 [5]
Windows    Windows 10 UAP [6], Windows 8 [7], and Windows Phone 8.1 [8]
Web App    Browser support for HTML5 [9]

 

Crosswalk runtime
Crosswalk Project runtime, Android Embedded build:   Release 14, 15, or 16 (=> Chromium versions 43, 44, or 45) [12]
Crosswalk Project runtime, Android Shared build:     Release 17 (=> Chromium version 46) [12]

 

Cordova API: CLI and "pinned" framework [13]
Cordova CLI                 Android    iOS      Windows
5.1.1 [11] [Deprecated]     4.1.1      3.8.0    4.0.0
5.4.1 [10a,b]               5.0.0      4.0.1    4.3.1
6.2.0 [14]                  5.1.1      4.1.1    4.3.2

 

References:

[1] Apple OS X versions- https://en.wikipedia.org/wiki/OS_X#Versions

[2] Ubuntu Linux versions- https://wiki.ubuntu.com/Releases

[3] Windows versions- https://en.wikipedia.org/wiki/List_of_Microsoft_Windows_versions#Client_versions

[4] Android Versions dashboard - https://source.android.com/source/build-numbers.html

[5] iOS versions - https://en.wikipedia.org/wiki/IOS_version_history

[6] Windows 10 Universal App Platform - https://en.wikipedia.org/wiki/IOS_version_history

[7] Windows 8 - http://windows.microsoft.com/en-us/windows-8/apps-windows-store-tutorial

[8] Windows Phone 8.1 - https://www.windowsphone.com/en-US/features-8-1

[9] HTML5 Browser Support, Relies on Browser support of HTML5 features, see http://caniuse.com/

[10a] Apache Cordova version 5.4.0 - https://cordova.apache.org/docs/en/5.4.0/guide/overview/index.html

[10b] Apache Cordova version 5.4.1 RN - http://cordova.apache.org/news/2015/11/24/tools-release.html

[11] Apache Cordova version 5.1.1 - https://cordova.apache.org/docs/en/5.1.1/guide/overview/index.html

[12] Crosswalk Release and Chromium Version - https://github.com/crosswalk-project/crosswalk-website/wiki/Release-dates

[13] FAQ, Understanding Cordova CLI and Pinned versions https://software.intel.com/en-us/xdk/faqs/cordova#cordova-version

[14] Apache Cordova version 6.2.1  - '6.x, Latest',  https://cordova.apache.org/docs/en/latest/index.html

 

Intel® Showcases New Talent; Drexel University Punches Through The Competition


Download PDF

Each year, the Intel® University Games Showcase (IUGS) is the place to be for a first look at some of the most innovative interactive entertainment being developed today. Held in conjunction with the Game Developers Conference (GDC), the competition is intense, with teams from the top ten academic game-developer programs in the U.S. competing for recognition, bragging rights, and $35,000 in hardware prizes.

The third annual showcase, held earlier this year, saw the Best Gameplay award go to Mirrors of Grimaldi, a local multiplayer game developed by the talented 51st and Fire team from Drexel University. A supportive audience, comprising nearly 500 industry professionals, students, the press, and media influencers, was on hand to cheer and encourage. The competition was judged by leading lights from the games industry.

For the Drexel team, the key to winning their category was harnessing the benefits of ideation to come up with a fresh and novel approach to a familiar game concept, namely surviving an attack by a horde of henchmen. This article details the team’s journey from initial concept, through development, to a fully-realized, award-winning game.

Reflecting On Grimaldi

At first glance, Mirrors of Grimaldi looks like a conventional, four-player, split-screen game. (The name, incidentally, references Joseph Grimaldi, a popular English actor of the Regency era who singlehandedly defined our modern image of a clown). Using a medieval, demonic carnival as the backdrop, players are attacked by swarms of evil minions that refuse to die. Your only option to stay alive is to punch your attackers out of your screen and into an opponent’s. But then something unexpected happens.


Mirrors of Grimaldi’s dynamic split screen

As your character’s health waxes or wanes, your screen size expands or shrinks in proportion. This leads to an intriguing dynamic. Becoming weaker risks having the screen collapse around your character, ejecting you from the game. But, conversely, a smaller screen also makes it easier to punch an enemy minion into a neighboring screen, threatening one of your opponents. “We tried to develop gameplay mechanics that use varying screen sizes not just as a feature, but as the principal mechanic of the game,” explained Andrew Lichtsinn, producer of Mirrors of Grimaldi.

In effect, the screen becomes an integral component of the game ("friend or foe," as Lichtsinn describes it), directly affecting how players position themselves, and how they assess threats both on and off their individual screens.

The team that would become 51st and Fire originally coalesced towards the end of the spring term of 2015. After batting several ideas around throughout the summer, the team officially formed in September as part of their senior project in the Digital Media Program. Before reaching the IUGS, however, they had to first get past their fellow classmates. “Drexel hosts an internal competition every year preceding the Games Showcase,” explained Dr. Jichen Zhu, Assistant Professor in the Digital Media Program. Having competed the previous year as a junior, Lichtsinn made it a goal to reach the IUGS in 2016.

Initially consisting of a core group of six members, including a producer, art director, and programmers and artists, the team further reached out to animator Alison Friedlander, as well as programmer Alex Hollander from the College of Computing and Informatics. The team further consulted with an experienced sound designer. “It was a very interdisciplinary team,” noted Zhu.


Drexel’s 51st and Fire team members: (standing from left) Andrew Lichtsinn, Alison Friedlander, Patrick Bastian, Boyd Fox, Evan Freed, Tom Trahey (front, from left) Steven Yaffe and Alex Hollander. They are joined here by Dr Jichen Zhu (standing, far right)

Ideating Innovation

Ideation was key to developing the core ideas behind Mirrors of Grimaldi's innovative gameplay. The process began with a series of brainstorming sessions, during which the team developed more than a dozen roughly hewn game ideas that they would then share with Zhu. Most failed to impress. But at one point, someone on the team proposed a split-screen approach that was quickly mocked up in Adobe* Photoshop*. "It wasn't even close to how the game would eventually appear, but Professor Zhu reacted so positively that we knew we were onto something," recalled Lichtsinn.

From the beginning, Zhu underscored the need to constantly evaluate project scope, especially given the relatively short development window available to the team. At the same time, the team was aware that adopting a split-screen scheme meant more than just creating interesting “visual eye candy,” as Lichtsinn described it. As the ideation evolved, Lichtsinn found everybody on the team contributing core concepts, and helping to develop the game organically.

For instance, one person came up with the idea of punching minions between screens, while someone else hit upon the notion that the minions should never expire. Other team members then suggested random global events, which automatically activate whenever screens haven’t fluctuated enough over a period of time. “The game was a conglomerate of a lot of brainstorming in front of a big whiteboard,” explained Lichtsinn.

Interestingly, Lichtsinn attributes the strength of the gameplay to the fact that there wasn’t a preconceived story idea or theme. “Professor Zhu repeatedly stressed initially keeping story out of the gameplay so we wouldn’t feel constrained,” recalled Lichtsinn. When it came time to craft the surrounding narrative, the idea of a demonic or creepy carnival was immediately popular with everybody on the team. From this, the hall of mirrors grew as a natural metaphor, as the four players essentially progress through mirrored, parallel environments.

In the early stages of the design process, the biggest hurdle turned out to be gesture controls. First and foremost, the team wanted the main action within the game, punching minions, to be an intuitive, fun, and challenging gesture instead of a button press. At the same time, they didn’t want a game that became a slippery slope once players started to lose. The solution was to have players charge a punch by rotating the stick and then flicking it in the direction of the punch; larger screens would necessitate heftier punches with longer charging times. “We wanted players who were winning to have to work a bit harder to keep their lead,” explained Lichtsinn.
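
A rough Unity* sketch of that control scheme is shown below. The input axis names, thresholds, and the three-times charge penalty are assumptions for illustration only, and stick rotation is simplified to plain stick deflection. The required charge time grows with the player’s current screen scale, and a hard flick releases the punch in the flick direction.

using UnityEngine;

// Illustrative only; names and tuning values are not from the shipped game.
public class ChargedPunch : MonoBehaviour
{
    public float screenScale = 1f;      // 0..1, fed by the split-screen system (assumed)
    public float baseChargeTime = 0.5f; // seconds needed at the smallest screen
    public float flickThreshold = 0.8f; // stick deflection that counts as a flick

    float charge;                       // 0..1

    void Update()
    {
        Vector2 stick = new Vector2(Input.GetAxis("PunchX"), Input.GetAxis("PunchY"));

        // Larger screens require longer charging, so the current leader works harder.
        float requiredTime = baseChargeTime * Mathf.Lerp(1f, 3f, screenScale);

        if (charge < 1f)
        {
            if (stick.magnitude > 0.1f)                // deflecting the stick charges up
                charge += Time.deltaTime / requiredTime;
        }
        else if (stick.magnitude > flickThreshold)
        {
            Punch(stick.normalized);                   // release in the flick direction
            charge = 0f;
        }
    }

    void Punch(Vector2 direction)
    {
        Debug.Log("Punch toward " + direction);        // knockback logic would go here
    }
}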

For the most part, ideation proceeded smoothly, with few disagreements. “We designed the essence of the game early enough that there wasn’t much disagreement about features,” recalled Lichtsinn. The more contentious questions were about the art, with several competing preferences. For instance, early in development, the team had the idea of using four different character styles, each representing a typical profession of the period. “We ended up having to cut that, because we simply didn't have enough time,” lamented Lichtsinn.

In many cases, final decisions were made either by Lichtsinn or art director Evan Freed. More complex decisions, however, went to a team vote—though that didn’t happen very often. The most notable instance was over the issue of whether the characters should carry weapons, or just punch with their fists. “That one had to go to a vote,” recalled Lichtsinn. “But since our team could fit into a small room, we never got gridlocked, or had to stop production because of competing ideas.”

Crafting With Unity

There was even less contention in the choice of development tools: the Drexel team selected the Unity* engine and used C# for all the programming. “For us, it was a pretty easy choice,” noted Lichtsinn. Not only was a free version of Unity available—Unreal* Engine didn’t offer a similar edition at the time—but since the programmers already had considerable experience with Unity and C#, they felt confident about hitting milestones on time. Moreover, Unity allowed the team to implement a feature and then play it in the editor without forcing a compile, significantly speeding development.

The team adopted an agile system with two-week sprints, and followed each full build with a comprehensive play-test. “We would open the door and invite passing students to try the game,” remembered Lichtsinn. The team also took advantage of a weekly meeting of gaming enthusiasts in Philadelphia called Philly Dev Night, which is affiliated with the Philly Game Forge community. “We collected a lot of data there about what people enjoyed, and what needed improvement,” noted Lichtsinn. “That informed our decisions about what to include in upcoming sprints.”


Mirrors of Grimaldi with four parallel environments

Early on, the team identified rendering as a potential roadblock. While most games only need to render a single camera view, Mirrors of Grimaldi essentially needed to draw four environments at the same time, all while maintaining an acceptable frame rate. The solution was to consolidate the textures into four atlases, so the game only had to load four texture files, significantly improving performance. Another challenge involved the minion enemies. A strictly AI-driven approach across four environments risked overloading the CPU. Instead, the team adopted a commonly used strategy of pre-computing much of the AI-based behavior while the minions were offscreen.
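
The team’s exact approach isn’t published, but a common Unity* pattern for the off-screen idea looks like the sketch below (class and method names are assumptions; OnBecameVisible and OnBecameInvisible require a Renderer on the same GameObject). Full AI runs only while a minion is visible to a camera, and a cheap, low-frequency update keeps it roughly in sync while it is off-screen.

using UnityEngine;

// A common throttling pattern, not the team's actual implementation.
public class MinionAI : MonoBehaviour
{
    public float offscreenTick = 0.5f;   // seconds between cheap off-screen updates

    bool visible;
    float nextOffscreenUpdate;

    void OnBecameVisible()   { visible = true; }
    void OnBecameInvisible() { visible = false; }

    void Update()
    {
        if (visible)
        {
            FullAIUpdate();                            // pathfinding, targeting, animation
        }
        else if (Time.time >= nextOffscreenUpdate)
        {
            CheapOffscreenUpdate();                    // coarse position bookkeeping only
            nextOffscreenUpdate = Time.time + offscreenTick;
        }
    }

    void FullAIUpdate() { /* expensive per-frame logic */ }
    void CheapOffscreenUpdate() { /* approximate movement while unseen */ }
}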

These solutions, coupled with custom graphic optimizations done entirely within the Unity framework, were all it took to make Mirrors of Grimaldi shine on the benchmark Intel® graphics-powered laptop. (These computers were supplied by Intel to all participants in the competition.) But the team didn’t stop there, going so far as to test against what Lichtsinn described simply as “a really old laptop” that was unearthed by a team member. “Almost everything we did was developed in-house, allowing our programmers to focus on optimization as needed,” explained Lichtsinn. “This allowed us to make sure that the game could run on as many platforms as possible, even on relatively old systems.”

During all this, time was an ever-present issue, both while readying for the internal competition and when approaching the IUGS. Initially, there was a mad dash to get as much art and as many features into the game as possible to show to the faculty. “But then, just as we were able to let out a sigh of relief for winning at Drexel, we realized that we had only three more weeks until we had to do it again at the IUGS,” recalled Lichtsinn.

The key to success was identifying the essential elements that showed Mirrors of Grimaldi’s truest value. “I think we definitely hit all those points,” summarized Lichtsinn. “The team really focused and re-focused, making sure the unique gameplay was always at the center of the entire experience,” added Zhu.

Taking It To The Next Level

For the Drexel team, winning the Best Gameplay category at the 2016 Intel University Games Showcase was deeply gratifying. “It meant a lot, and we were all pretty shocked when it happened,” said Lichtsinn. “We felt that we had come up with a great idea, but you never really know until you actually build the game, and see people playing it and having fun.”

The team continues to polish and enhance Mirrors of Grimaldi, and recently made it available on Steam* Greenlight. The plan is to complete final development in 2017, adding new maps and game modes, among other features, for distribution in the Steam Store, with the game priced competitively to spur interest. At that point, based on feedback, the team could consider any of several options, including porting to other platforms such as Sony* PlayStation* or Microsoft* Xbox*.

In the meantime, the team members take pleasure in knowing that they were able to come up with a stylish game that’s not only fun to play, but also brings something new to the genre. “Most everyone gets at least a little startled when they first see the dynamic split-screen starting to move. That’s great,” enthused Lichtsinn.

SIDEBAR

Tips From The Drexel Team For Pushing The Innovation Envelope

  • Encourage all team members, irrespective of role, to participate in the ideation process.
  • Think outside the box, literally and figuratively, when developing within a well-established genre.
  • When developing new features, consider how to make the enhancements integral to the overall mechanics and gameplay.
  • Games are products, and products have deadlines. Pay close attention to how new features affect the scope of the product.
  • Choose a development environment that matches your team’s skills, and is in sync with your development methodologies.
  • Don’t be afraid to try to startle and delight your audience.

 

Resources

Intel University Games Showcase 2016 Overview

Intel University Games Showcase 2016 Results

Drexel University Digital Media Program Web Site

Mirrors of Grimaldi Steam Greenlight page

51st and Fire Web Site

Unity Engine Web Site

Intel® Parallel Studio XE 2017 Composer Edition BETA Fortran - Debug Solutions Release Notes


This page provides the current Release Notes for the Debug Solutions from Intel® Parallel Studio XE 2017 Composer Edition BETA Update 1 for Fortran Linux*, Windows* and OS X* products.

To get product updates, log in to the Intel® Software Development Products Registration Center.

For questions or technical support, visit Intel® Software Products Support.

For the top-level Release Notes, visit:

Table of Contents:

Change History

This section highlights important changes from the previous product version as well as changes in product updates.

Changes since Intel® Parallel Studio XE 2017 Composer Edition BETA

  • Fortran Expression Evaluator (FEE):
    • Added support for displaying extended types that are parameterized
    • Added the ability in FEE to change the format in which a value is displayed in the debugger windows (e.g., Watch, Immediate) by using format specifiers (specified here); see the example after this list. The supported format specifiers are "x", "s", "d", "o", "c", "e", "g", and "f".
    • Added display of array dimensions in the "Value" column of the Watch and Locals views and in array tooltips.
    • Added support for viewing a particular element of a data structure across an array of such structures (e.g., students(1:100:1)%name). Value assignment to this type of expression is not supported, though.
    • Modified the display of character variable data that contains nulls. Editing of null-containing strings is disabled in the Watch and Locals views. Editing in the memory window is still possible.
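
For example, assuming FEE follows the usual Microsoft Visual Studio* convention of appending the specifier after a comma, and using a hypothetical integer variable named count, entering count,x in the Watch window would display the value in hexadecimal, while count,o would display it in octal.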

Changes since Intel® Parallel Studio XE 2016 Composer Edition

  • Simplified Eclipse* plug-in
  • Support for Intel® Xeon Phi™ coprocessor & processor X200 offload debugging
  • Shipping GNU* Project Debugger (GDB) 7.10 (except for Intel® Debugger for Heterogeneous Compute 2017)
  • Improved Fortran Variable Length Array support for GNU* Project Debugger

Product Contents

  • Linux*:
    • GNU* Project Debugger (GDB) 7.10:
      Command line for host CPU and Intel® Xeon Phi™ coprocessor, and Eclipse* IDE plugin for offload enabled applications.
  • OS X*:
    • GNU* Project Debugger (GDB) 7.10:
      Command line for CPU only.
  • Windows*:
    • Intel® Debugger Extension for Intel® Many Integrated Core Architecture (Intel® MIC Architecture)
    • Fortran Expression Evaluator (FEE) as an extension to the Microsoft Visual Studio* debugger

GNU* GDB

This section summarizes the changes, new features, customizations and known issues related to the GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition.
 

Features

GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition and above is based on GDB 7.10 with additional enhancements provided by Intel. This debugger replaces the Intel® Debugger from previous releases. In addition to features found in GDB 7.10, there are several other new features:
  • Intel® Processor Trace (Intel® PT) support for 5th generation Intel® Core™ Processors:
    (gdb) record btrace pt
  • Support for Intel® Many Integrated Core Architecture (Intel® MIC Architecture) of Intel® Xeon Phi™ coprocessor X100
  • Support for Intel® Xeon Phi™ coprocessor & processor X200
  • Support for Intel® Transactional Synchronization Extensions (Intel® TSX) (Linux* & OS X*)
  • Register support for Intel® Memory Protection Extensions (Intel® MPX) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512)
  • Data Race Detection (pdbx):
    Detect and locate data races for applications threaded using POSIX* thread (pthread) or OpenMP* models
  • Branch Trace Store (btrace):
    Record branches taken in the execution flow to backtrack easily after events like crashes, signals, exceptions, etc.
All features are available for Linux*, but only Intel® TSX is supported for OS X*.
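
For instance, a minimal gdb-ia session that uses the Intel® PT based recording could look like the following (my_app is a placeholder binary name): start runs to a temporary breakpoint at main, record btrace pt enables recording, and record function-call-history prints the control flow collected while stepping.

$ gdb-ia ./my_app
(gdb) start
(gdb) record btrace pt
(gdb) next
(gdb) record function-call-history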
 

Using GNU* GDB

GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition comes in different versions:
  • IA-32/Intel® 64 debugger:
    Debug applications natively on IA-32 or Intel® 64 systems with gdb-ia on the command line.
    A standard Eclipse* IDE can be used for this as well if a graphical user interface is desired.
  • Intel® Xeon Phi™ coprocessor debugger (only for Linux*):
    Debug applications remotely on Intel® Xeon Phi™ coprocessor systems. The debugger will run on a host system and a debug agent (gdbserver) on the coprocessor; a short connection example follows this list.
    There are two options:
    • Use the command line version of the debugger with gdb-mic.
      This only works for native Intel® Xeon Phi™ coprocessor X100 applications. For Intel® Xeon Phi™ coprocessor & processor X200 use gdb-ia.
      A standard Eclipse* IDE can be used for this as well if a graphical user interface is desired.
    • Use an Eclipse* IDE plugin shipped with Intel® Parallel Studio XE 2017 Composer Edition.
      This works only for offload enabled Intel® Xeon Phi™ coprocessor applications. Instructions on how to use GNU* GDB can be found in the Documentation section.
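
As an illustration of the remote command-line flow described above (the coprocessor name mic0, the port 2000, and the binary name are placeholders, not fixed values), the debug agent is started on the coprocessor and the host-side debugger connects to it:

On the coprocessor:
$ gdbserver :2000 ./my_native_app

On the host:
$ gdb-mic ./my_native_app
(gdb) target remote mic0:2000
(gdb) continue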

Documentation

The documentation for the provided GNU* GDB can be found here:
<install-dir>/documentation_2017/en/debugger/gdb-ia/gdb.pdf
<install-dir>/documentation_2017/en/debugger/gdb-mic/gdb.pdf
<install-dir>/documentation_2017/en/debugger/ps2016/get_started.htm

The latter is available online as well:

Known Issues and Changes

Not found: libncurses.so.5

On some systems, using the GNU* GDB version that is provided by Intel fails due to a missing libncurses.so.5 (e.g., Fedora 24 and 25). Please install the package ncurses-compat-libs, which provides the missing library.

Not found: libtinfo.so.5

On some systems, using the GNU* GDB version that is provided by Intel fails due to a missing libtinfo.so.5 (e.g., SLES 11 SP3). If a package for libtinfo is not available, the following workaround can be applied:

$ sudo ln -s <path>/libncurses.so.5.6 <path>/libtinfo.so.5

As <path>, use the location of the system's ncurses library.

Safely ending offload debug sessions

To avoid issues like orphan processes or stale debugger windows when ending offload applications, manually end the debugging session before the application reaches its exit code. The following procedure is recommended for terminating a debug session:
  1. Manually stop the debug session before the application reaches its exit code.
  2. When stopped, press the red stop button in the toolbar of the Intel® MIC Architecture-side debugger first. This will end the offloaded part of the application.
  3. Next, do the same for the CPU-side debugger.
  4. The link between the two debuggers will be kept alive. The Intel® MIC Architecture-side debugger will stay connected to the debug agent and the application will remain loaded in the CPU-side debugger, including all breakpoints that have been set.
  5. At this point, both debugger windows can safely be closed.

Intel® MIC Architecture-side debugger asserts on setting source directories

Setting source directories in the GNU* GDB might lead to an assertion.
Resolution:
The assertion should not affect debugger operation. To avoid the assertion anyway, don’t use source directory settings. The debugger will prompt you to browse for files it cannot locate automatically.

Debugger and debugged application required to be located on local drive (OS X* only)

In order to use the provided GNU* GDB (gdb-ia), it has to be installed on a local drive. As such, the entire Intel® Parallel Studio XE 2017 package has to be installed locally. Any application that is being debugged needs to be located on a local drive as well. This is a general requirement that’s inherent to GNU GDB with OS X*.

Debugging Fortran applications with Eclipse* IDE plugin for Intel® Xeon Phi™ coprocessor

If the Eclipse* IDE plugin for the Intel® Xeon Phi™ coprocessor is used for debugging Fortran applications, evaluation of arrays in the locals window might be incorrect. The underlying CDT applies the C/C++ syntax with brackets to arrays to retrieve their contents. This does not work for Fortran.
Solution: Use a fully qualified Fortran expression to retrieve the contents of arrays (e.g. with array sections like array(1:10)).
 
Intel® Debugger Extension for Intel® Many Integrated Core Architecture (Intel® MIC Architecture)

This section summarizes new features and changes, usage, and known issues related to the Intel® Debugger Extension. This debugger extension only supports code targeting Intel® Many Integrated Core Architecture (Intel® MIC Architecture).
 

Features

  • Support for both native Intel® Xeon Phi™ coprocessor applications and host applications with offload extensions
  • Debug multiple Intel® Xeon Phi™ coprocessors at the same time (with offload extension)

Using the Intel® Debugger Extension

The Intel® Debugger Extension is a plug-in for the Microsoft Visual Studio* IDE. It transparently enables debugging of projects defined by that IDE. Applications for Intel® Xeon Phi™ coprocessors can be either loaded and executed or attached to. This extension supports debugging of offload enabled code, using:
  • Microsoft Visual Studio* 2012
  • Microsoft Visual Studio* 2013
  • Microsoft Visual Studio* 2015

Documentation

The full documentation for the Intel® Debugger Extension can be found here:
<install-dir>\documentation_2017\en\debugger\ps2017\get_started.htm

This is available online as well:

Known Issues and Limitations

  • Disassembly window cannot be scrolled outside of 1024 bytes from the starting address within an offload section.
  • Handling of exceptions from the Intel® MIC Architecture application is not supported.
  • Starting an Intel® MIC Architecture native application is not supported. You can attach to a currently running application, though.
  • The Thread Window in Microsoft Visual Studio* offers context menu actions to Freeze, Thaw and Rename threads. These context menu actions are not functional when the thread is on an Intel® Xeon Phi™ coprocessor.
  • Setting a breakpoint right before an offload section sets a breakpoint at the first statement of the offload section. This is only true if there is no host statement between the set breakpoint and the offload section. This is normal Microsoft Visual Studio* breakpoint behavior but might become more visible with interwoven host and Intel® Xeon Phi™ coprocessor code. The superfluous breakpoint for the offload section can be manually disabled (or removed) if desired.
  • Only Intel® 64 applications containing offload sections can be debugged with the Intel® Debugger Extension for Intel® Many Integrated Core Architecture.
  • Stepping out of an offload section does not step back into the host code. It rather continues execution without stopping (unless another event occurs). This is intended behavior.
  • The functionality “Set Next Statement” is not working within an offload section.
  • If breakpoints have been set for an offload section in a project already, starting the debugger might show bound breakpoints without addresses. Those do not have an impact on functionality.
  • For offload sections, breakpoints with the following hit-count conditions do not work: “break when the hit count is equal to” and “break when the hit count is a multiple of”.
  • The following options in the Disassembly window do not work within offload sections: “Show Line Numbers”, “Show Symbol Names”, and “Show Source Code”.
  • Evaluating variables declared outside the offload section shows wrong values.
  • Please consult the Output (Debug) window for detailed reporting. It will name unimplemented features (see above) or provide additional information about configuration problems in a debugging session. You can open the window in Microsoft Visual Studio* via the menu Debug->Windows->Output.
  • When debugging an offload-enabled application and a variable assignment is entered in the Immediate Window, the debugger may hang if assignments read memory locations before writing to them (for example, x=x+1). Please do not use the Immediate Window for changing variable values for offload-enabled applications.
  • When using the debugger extensions provided by Intel, the behavior (for example, run control) and output (for example, disassembly) could differ from what is experienced with the Microsoft Visual Studio* debugger. This is because of the different debugging technologies implemented by each and should not have a significant impact on the debugging experience.

Fortran Expression Evaluator (FEE) for debugging Fortran applications with Microsoft Visual Studio*

Fortran Expression Evaluator (FEE) is a plug-in for Microsoft Visual Studio* that is installed with Intel® Visual Fortran Compiler. It extends the standard debugger in Microsoft Visual Studio* IDE by handling Fortran expressions. There is no other change in usability.

Known Issues and Limitations

Microsoft Visual Studio 2013 Shell* does not work

To enable FEE with Microsoft Visual Studio 2013 Shell, you need to move both files ForIntrinsics.dll and ForOps11.dll from:

<Program Files (x86) Directory>\Microsoft Visual Studio 12.0\Common7\IDE\Remote Debugger\x64

to:

<Program Files Directory>\Microsoft Visual Studio 12.0\Common7\IDE\Remote Debugger\x64

After that, restart your Microsoft Visual Studio 2013 Shell to use FEE. This will be fixed in a future update release.

Conditional breakpoints limited

Conditional breakpoints that contain expressions with allocatable variables are not supported for Microsoft Visual Studio 2012* or later.

Debugging might fail when only Microsoft Visual Studio 2013/2015* is installed

For some FEE functionality the Microsoft Visual Studio 2012* libraries are required. One solution is to install Microsoft Visual Studio 2012* in addition to Microsoft Visual Studio 2013/2015*. An alternative is to install the "Visual C++ Redistributable for Microsoft Visual Studio 2012 Update 4" found here.
If you installed Intel® Parallel Studio XE 2017 on a system without any Microsoft Visual Studio* version available, a Microsoft Visual Studio 2013* Shell (including libraries) will be installed. FEE might not work in that environment; please additionally install the redistributable package mentioned above to enable FEE. A future update will solve this problem for the installation of the shell.

Debugging mixed language programs with Fortran does not work

To enable debugging Fortran code called from a .NET managed code application in Visual Studio 2012 or later, unset the following configuration:
Menu Tools->Options, under section Debugging->General, clear the "Managed C++ Compatibility Mode" or "Use Managed Compatibility Mode" check box

For any managed code application, one must also check the project property Debug > Enable unmanaged code debugging.

Native edit and continue

With Microsoft Visual Studio 2015*, Fortran debugging of mixed code applications is enabled if "native edit and continue" is enabled for the C/C++ part of the code. In earlier versions this is not supported.

FEE truncates entries in locals window

To increase debugging performance, the maximum number of locals queried by the debug engine is limited with Intel® Parallel Studio XE 2016 and later releases. If a location in the source code has more than that number of locals, they are truncated and a note is shown:

Note: Too many locals! For performance reasons the list got cut after 500 entries!

The threshold can be controlled via the environment variable FEE_MAX_LOCALS. Specify a positive value for the new threshold (the default is 500). A value of -1 can be used to turn off truncation entirely (restoring the previous behavior), but at the cost of slower debug state transitions. For the change to take effect, Microsoft Visual Studio* needs to be restarted.
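
For example, to raise the limit to 1000 entries (an arbitrary illustrative value), set the variable before launching the IDE, e.g. from a Visual Studio* command prompt:

set FEE_MAX_LOCALS=1000
devenv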

Problem with debugging C# applications

If Microsoft Visual Studio 2015* is used, debugging of C# applications might cause problems; for example, evaluations like watches won't work. If you experience such issues, try enabling "Managed Compatibility Mode". More details on how to enable it can be found here:
http://blogs.msdn.com/b/visualstudioalm/archive/2013/10/16/switching-to-managed-compatibility-mode-in-visual-studio-2013.aspx

The problem is known and will be fixed with a future version.

Attributions

This product includes software developed at:

GDB – The GNU* Project Debugger

Copyright Free Software Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.

This program is free software; you can redistribute it and/or modify it under the terms and conditions of the GNU General Public License, version 2, as published by the Free Software Foundation.

This program is distributed in the hope it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.

GNU* Free Documentation License

Version 1.3, 3 November 2008

 

Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. <http://fsf.org/>

 

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

 

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

 

1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

The "publisher" means any person or entity that distributes copies of the Document to the public.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

 

2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

 

3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

 

4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.

B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.

C. State on the Title page the name of the publisher of the Modified Version, as the publisher.

D. Preserve all the copyright notices of the Document.

E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.

H. Include an unaltered copy of this License.

I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.

N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.

O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

 

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".

 

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

 

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

 

8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

 

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.

 

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.

 

10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.

 

11. RELICENSING

"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site.

"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.

"Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document.

An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.

The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.

 

Disclaimer and Legal Information

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to:  http://www.intel.com/design/literature.htm

Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to:
http://www.intel.com/products/processor_number/

MPEG-1, MPEG-2, MPEG-4, H.261, H.263, H.264, MP3, DV, VC-1, MJPEG, AC3, AAC, G.711, G.722, G.722.1, G.722.2, AMRWB, Extended AMRWB (AMRWB+), G.167, G.168, G.169, G.723.1, G.726, G.728, G.729, G.729.1, GSM AMR, GSM FR are international standards promoted by ISO, IEC, ITU, ETSI, 3GPP and other organizations. Implementations of these standards, or the standard enabled platforms may require licenses from various entities, including Intel Corporation.

BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino Inside, Cilk, Core Inside, i960, Intel, the Intel logo, Intel AppUp, Intel Atom, Intel Atom Inside, Intel Core, Intel Inside, Intel Inside logo, Intel NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel Sponsors of Tomorrow., the Intel Sponsors of Tomorrow. logo, Intel StrataFlash, Intel Viiv, Intel vPro, Intel XScale, InTru, the InTru logo, InTru soundmark, Itanium, Itanium Inside, MCS, MMX, Moblin, Pentium, Pentium Inside, skoool, the skoool logo, Sound Mark, The Journey Inside, vPro Inside, VTune, Xeon, and Xeon Inside are trademarks of Intel Corporation in the U.S. and other countries.

* Other names and brands may be claimed as the property of others.

Microsoft, Windows, Visual Studio, Visual C++, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

Java is a registered trademark of Oracle and/or its affiliates.

Copyright (C) 2008–2016, Intel Corporation. All rights reserved.

Intel® Parallel Studio XE 2017 Composer Edition BETA C++ - Debug Solutions Release Notes


This page provides the current Release Notes for the Debug Solutions from Intel® Parallel Studio XE 2017 Composer Edition BETA Update 1 for C++ Linux*, Windows* and OS X* products.

To get product updates, log in to the Intel® Software Development Products Registration Center.

For questions or technical support, visit Intel® Software Products Support.

For the top-level Release Notes, visit:

Table of Contents:

Change History

This section highlights important changes from the previous product version as well as changes in product updates.

Changes since Intel® Parallel Studio XE 2016 Composer Edition

  • Simplified Eclipse* plug-in
  • Support for Intel® Xeon Phi™ coprocessor & processor X200 offload debugging
  • Shipping GNU* Project Debugger (GDB) 7.10 (except for Intel® Debugger for Heterogeneous Compute 2017)

Product Contents

This section lists the individual Debug Solutions components for each supported host OS. Not all components are available for all host OSes.

  • Linux*:
    • GNU* Project Debugger (GDB) 7.10:
      Command line for host CPU and Intel® Xeon Phi™ coprocessor & processor, and Eclipse* IDE plugin for offload enabled applications.
    • Intel® Debugger for Heterogeneous Compute 2017
  • OS X*:
    • GNU* Project Debugger (GDB) 7.10:
      Command line for CPU only.
  • Windows*:
    • Intel® Debugger Extension for Intel® Many Integrated Core Architecture (Intel® MIC Architecture)

GNU* GDB

This section summarizes the changes, new features, customizations and known issues related to the GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition.
 

Features

GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition and above is based on GDB 7.10 with additional enhancements provided by Intel. This debugger replaces the Intel® Debugger from previous releases. In addition to features found in GDB 7.10, there are several other new features:
  • Intel® Processor Trace (Intel® PT) support for 5th generation Intel® Core™ Processors:
    (gdb) record btrace pt
  • Support for Intel® Many Integrated Core Architecture (Intel® MIC Architecture) of Intel® Xeon Phi™ coprocessor X100
  • Support for Intel® Xeon Phi™ coprocessor & processor X200
  • Support for Intel® Transactional Synchronization Extensions (Intel® TSX) (Linux* & OS X*)
  • Register support for Intel® Memory Protection Extensions (Intel® MPX) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512)
  • Data Race Detection (pdbx):
    Detect and locate data races for applications threaded using POSIX* thread (pthread) or OpenMP* models
  • Branch Trace Store (btrace):
    Record branches taken in the execution flow to backtrack easily after events like crashes, signals, exceptions, etc.
  • Pointer Checker:
    Assists in finding pointer issues if the application is compiled with the Intel® C++ Compiler and the Pointer Checker feature is enabled (see the Intel® C++ Compiler documentation for more information)
  • Improved Intel® Cilk™ Plus Support:
    Serialized execution of Intel® Cilk™ Plus parallel applications can be turned on and off during a debug session using the following command:
    (gdb) set cilk-serialization [on|off]
All features are available for Linux*, but only Intel® TSX is supported for OS X*.
 

Using GNU* GDB

GNU* GDB provided with Intel® Parallel Studio XE 2017 Composer Edition comes in different versions:
  • IA-32/Intel® 64 debugger:
    Debug applications natively on IA-32 or Intel® 64 systems with gdb-ia on the command line.
    A standard Eclipse* IDE can be used for this as well if a graphical user interface is desired.
  • Intel® Xeon Phi™ coprocessor & processor debugger (only for Linux*):
    Debug applications remotely on Intel® Xeon Phi™ coprocessor systems. The debugger will run on a host system and a debug agent (gdbserver) on the coprocessor.
    There are two options:
    • Use the command line version of the debugger with gdb-mic.
      This only works for native Intel® Xeon Phi™ coprocessor X100 applications. For Intel® Xeon Phi™ coprocessor & processor X200 use gdb-ia.
      A standard Eclipse* IDE can be used for this as well if a graphical user interface is desired.
    • Use an Eclipse* IDE plugin shipped with Intel® Parallel Studio XE 2017 Composer Edition.
      This works only for offload enabled Intel® Xeon Phi™ coprocessor & processor applications. Instructions on how to use GNU* GDB can be found in the Documentation section.

Documentation

The documentation for the provided GNU* GDB can be found here:
<install-dir>/documentation_2017/en/debugger/gdb-ia/gdb.pdf
<install-dir>/documentation_2017/en/debugger/gdb-mic/gdb.pdf
<install-dir>/documentation_2017/en/debugger/ps2017/get_started.htm

The latter is available online as well:

Known Issues and Changes

Not found: libncurses.so.5

On some systems, using the GNU* GDB version that is provided by Intel fails due to a missing libncurses.so.5 (e.g., Fedora 24 and 25). Please install the package ncurses-compat-libs, which provides the missing library.

Not found: libtinfo.so.5

On some systems, using the GNU* GDB version that is provided by Intel fails due to a missing libtinfo.so.5 (e.g., SLES 11 SP3). If a package for libtinfo is not available, the following workaround can be applied:

$ sudo ln -s <path>/libncurses.so.5.6 <path>/libtinfo.so.5

As <path>, use the location of the system's ncurses library.

Safely ending offload debug sessions

To avoid issues like orphan processes or stale debugger windows when ending offload applications, manually end the debugging session before the application reaches its exit code. The following procedure is recommended for terminating a debug session:
  1. Manually stop the debug session before the application reaches its exit code.
  2. When stopped, press the red stop button in the toolbar of the Intel® MIC Architecture-side debugger first. This will end the offloaded part of the application.
  3. Next, do the same for the CPU-side debugger.
  4. The link between the two debuggers will be kept alive. The Intel® MIC Architecture-side debugger will stay connected to the debug agent and the application will remain loaded in the CPU-side debugger, including all breakpoints that have been set.
  5. At this point, both debugger windows can safely be closed.

Intel® MIC Architecture-side debugger asserts on setting source directories

Setting source directories in the GNU* GDB might lead to an assertion.
Resolution:
The assertion should not affect debugger operation. To avoid the assertion anyway, don’t use source directory settings. The debugger will prompt you to browse for files it cannot locate automatically.
 

Accessing _Cilk_shared variables in the debugger

Writing to a shared variable in an offloaded section from within the CPU-side debugger before the CPU-side debuggee has accessed that variable may result in the written value being lost, a wrong value being displayed, or the application crashing.

Consider the following code snippet:

_Cilk_shared bool is_active;
_Cilk_shared void my_target_func() {
  // Accessing "is_active" from the debugger *could* lead to unexpected
  // results here, e.g. a lost write or outdated data being read.
  is_active = true;
  // Accessing "is_active" (read or write) from the debugger at this
  // point is considered safe e.g. correct value is displayed.
}

Debugger and debugged application required to be located on local drive (OS X* only)

In order to use the provided GNU* GDB (gdb-ia), it has to be installed on a local drive. As such, the entire Intel® Parallel Studio XE 2017 package has to be installed locally. Any application that is being debugged needs to be located on a local drive as well. This is a general requirement that’s inherent to GNU GDB with OS X*.
 

Intel® Debugger for Heterogeneous Compute 2017

Features

The version of Intel® Debugger for Heterogeneous Compute 2017 provided as part of Intel® Parallel Studio XE 2017 Composer Edition uses GDB version 7.6. It provides the following features:

  • Debugging applications containing offload enabled code to Intel® Graphics Technology
  • Eclipse* IDE integration

The provided documentation (<install-dir>/documentation_2017/en/debugger/ps2017/get_started.htm) contains more information.

Requirements

For Intel® Debugger for Heterogeneous Compute 2017, the following is required:

  • Hardware
    • A dedicated host system is required because debugging halts the GPU on the target system, which means the target can no longer provide visual feedback.
    • Network connection (TCP/IP) between host and target system.
    • 4th generation Intel® Core™ processor or later with Intel® Graphics Technology up to GT3 for the target system.
  • Software

Documentation

The documentation can be found here:
  • <install-dir>/documentation_2017/en/debugger/gdb-igfx/gdb.pdf
  • <install-dir>/documentation_2017/en/debugger/ps2017/get_started.htm
 

Known Issues and Limitations

No call-stack

There is currently no provision for call-stack display. This will be addressed in a future version of the debugger.

Un-interruptible threads

Due to hardware limitations, it is not possible to interrupt a running thread. This may cause intermittent side effects while debugging: the debugger may display incorrect register and variable values for such threads, and it may show SIGTRAP messages when breakpoints are removed while other threads are running.

Evaluation of expressions with side-effects

The debugger does not evaluate expressions that contain assignments which read memory locations before writing to them (e.g. x = x + 1). Please do not use such assignments when evaluating expressions.

 
This section summarizes new features, changes, usage notes, and known issues related to the Intel® Debugger Extension. This debugger extension only supports code targeting Intel® Many Integrated Core Architecture (Intel® MIC Architecture).
 

Features

  • Support for both native Intel® Xeon Phi™ coprocessor applications and host applications with offload extensions
  • Debug multiple Intel® Xeon Phi™ coprocessors at the same time (with offload extension)

Using the Intel® Debugger Extension

The Intel® Debugger Extension is a plug-in for the Microsoft Visual Studio* IDE. It transparently enables debugging of projects defined by that IDE. Applications for Intel® Xeon Phi™ coprocessors can be either loaded and executed or attached to. This extension supports debugging of offload enabled code, using:
  • Microsoft Visual Studio* 2012
  • Microsoft Visual Studio* 2013
  • Microsoft Visual Studio* 2015

Documentation

The full documentation for the Intel® Debugger Extension can be found here:
<install-dir>\documentation_2017\en\debugger\ps2017\get_started.htm

This is available online as well:

Known Issues and Limitations

  • Using conditional breakpoints for offload sections might stall the debugger. If a conditional breakpoint is created within an offload section, the debugger might hang when hitting it and evaluating the condition. This is currently being analyzed and will be resolved in a future version of the product.
  • Data breakpoints are not yet supported within offload sections.
  • Disassembly window cannot be scrolled outside of 1024 bytes from the starting address within an offload section.
  • Handling of exceptions from the Intel® MIC Architecture application is not supported.
  • Changing breakpoints while the application is running does not work. The changes will appear to be in effect but they are not applied.
  • Starting an Intel® MIC Architecture native application is not supported. You can attach to a currently running application, though.
  • The Thread Window in Microsoft Visual Studio* offers context menu actions to Freeze, Thaw and Rename threads. These context menu actions are not functional when the thread is on an Intel® Xeon Phi™ coprocessor.
  • Setting a breakpoint right before an offload section sets a breakpoint at the first statement of the offload section. This is only true if there is no host statement between the set breakpoint and the offload section. This is normal Microsoft Visual Studio* breakpoint behavior but might become more visible with interleaved host and Intel® Xeon Phi™ coprocessor code. The superfluous breakpoint for the offload section can be manually disabled (or removed) if desired.
  • Only Intel® 64 applications containing offload sections can be debugged with the Intel® Debugger Extension for Intel® Many Integrated Core Architecture.
  • Stepping out of an offload section does not step back into the host code. It rather continues execution without stopping (unless another event occurs). This is intended behavior.
  • The functionality “Set Next Statement” is not working within an offload section.
  • If breakpoints have been set for an offload section in a project already, starting the debugger might show bound breakpoints without addresses. Those do not have an impact on functionality.
  • For offload sections, setting breakpoints by address or within the Disassembly window won’t work.
  • For offload sections, using breakpoints with the following conditions of hit counts do not work: “break when the hit count is equal to” and “break when the hit count is a multiple of”.
  • The following options in the Disassembly window do not work within offload sections: “Show Line Numbers”, “Show Symbol Names” and “Show Source Code”
  • Evaluating variables declared outside the offload section shows wrong values.
  • Please consult the Output (Debug) window for detailed reporting. It names unimplemented features (see above) and provides additional information about configuration problems in a debugging session. You can open the window in Microsoft Visual Studio* via the menu Debug > Windows > Output.
  • When debugging an offload-enabled application and a variable assignment is entered in the Immediate Window, the debugger may hang if assignments read memory locations before writing to them (for example, x=x+1). Please do not use the Immediate Window for changing variable values for offload-enabled applications.
  • Depending on the debugger extensions provided by Intel, the behavior (for example, run control) and output (for example, disassembly) could differ from what is experienced with the Microsoft Visual Studio* debugger. This is because of the different debugging technologies implemented by each and should not have a significant impact on the debugging experience.

Attributions

This product includes software developed at:

GDB – The GNU* Project Debugger

Copyright Free Software Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.

This program is free software; you can redistribute it and/or modify it under the terms and conditions of the GNU General Public License, version 2, as published by the Free Software Foundation.

This program is distributed in the hope it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.

GNU* Free Documentation License

Version 1.3, 3 November 2008

 

Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. <http://fsf.org/>

 

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

 

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

 

1. APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

The "publisher" means any person or entity that distributes copies of the Document to the public.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

 

2. VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.

 

3. COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

 

4. MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.

B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.

C. State on the Title page the name of the publisher of the Modified Version, as the publisher.

D. Preserve all the copyright notices of the Document.

E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.

H. Include an unaltered copy of this License.

I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.

N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.

O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

 

5. COMBINING DOCUMENTS

You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".

 

6. COLLECTIONS OF DOCUMENTS

You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

 

7. AGGREGATION WITH INDEPENDENT WORKS

A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

 

8. TRANSLATION

Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

 

9. TERMINATION

You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License.

 

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.

 

10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.

 

11. RELICENSING

"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site.

"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.

"Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document.

An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.

The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.

 

Disclaimer and Legal Information

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to:  http://www.intel.com/design/literature.htm

Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. Go to:
http://www.intel.com/products/processor_number/

MPEG-1, MPEG-2, MPEG-4, H.261, H.263, H.264, MP3, DV, VC-1, MJPEG, AC3, AAC, G.711, G.722, G.722.1, G.722.2, AMRWB, Extended AMRWB (AMRWB+), G.167, G.168, G.169, G.723.1, G.726, G.728, G.729, G.729.1, GSM AMR, GSM FR are international standards promoted by ISO, IEC, ITU, ETSI, 3GPP and other organizations. Implementations of these standards, or the standard enabled platforms may require licenses from various entities, including Intel Corporation.

BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino Inside, Cilk, Core Inside, i960, Intel, the Intel logo, Intel AppUp, Intel Atom, Intel Atom Inside, Intel Core, Intel Inside, Intel Inside logo, Intel NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel Sponsors of Tomorrow., the Intel Sponsors of Tomorrow. logo, Intel StrataFlash, Intel Viiv, Intel vPro, Intel XScale, InTru, the InTru logo, InTru soundmark, Itanium, Itanium Inside, MCS, MMX, Moblin, Pentium, Pentium Inside, skoool, the skoool logo, Sound Mark, The Journey Inside, vPro Inside, VTune, Xeon, and Xeon Inside are trademarks of Intel Corporation in the U.S. and other countries.

* Other names and brands may be claimed as the property of others.

Microsoft, Windows, Visual Studio, Visual C++, and the Windows logo are trademarks, or registered trademarks of Microsoft Corporation in the United States and/or other countries.

Java is a registered trademark of Oracle and/or its affiliates.

Copyright (C) 2008–2016, Intel Corporation. All rights reserved.

How to Debug Fortran Coarray Applications on Windows


When a Fortran coarray application is started under the Visual Studio* debugger on Windows, the current debug window does not have control over the images running the application. This article presents a method for getting debug control over a selected instance of the program ("image" in the language's terminology).

Each image runs in a separate process, but there is no external indication of which process corresponds to which image. The Visual Studio debugger can attach to another process, but you need a way to know which process corresponds to which image and to make all the images wait until you have set the desired breakpoints and are ready to resume execution.

The attached source is a module called Coarray_Debugging, containing a subroutine called Enable_Coarray_Debugging. Save this source to your local disk and add it to your project as a source file.

Download: Coarray_Debugging_Mod.f90

At the beginning of your main program's source file, add the line:

use Coarray_Debugging

This goes after the PROGRAM statement (if any) and before any IMPLICIT statements or other specification statements. If you have other USE statements, it can go in with them in any order.

Next, before the first executable line in your program add the line:

call Enable_Coarray_Debugging

This subroutine makes all images wait until you indicate that they should proceed. Build your application. Before starting the program, open Coarray_Debugging_Mod.f90 and set a breakpoint on the line with the call to SLEEPQQ.

When you set the breakpoint by clicking in the gray bar to the left of the line, a red circle will appear. When the breakpoint is hit later, the circle will also show a yellow arrow.

Start the program under the debugger (Debug > Start Debugging or press F5).

The program will call Enable_Coarray_Debugging, which displays a line for each image in the Output pane. Each line contains the image number, process name, and process ID. For example:
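
The exact wording in the Output pane may differ, but the listing looks something like this illustrative example:

  Image 1: coarray_mcpi.exe, process ID 48764
  Image 2: coarray_mcpi.exe, process ID 47176
  ...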

In this example, the process name is "coarray_mcpi.exe" (the name of the executable). Image 1 is process ID 48764, image 2 is 47176, etc. We are going to debug image 2 which is process ID 47176.

In Visual Studio, select Debug > Attach to Process...

The Attach dialog will appear, listing all of the processes on your system. Scroll until you find the process ID that matches the one you want. In this case we want 47176.

Once the correct process has been selected, click the Attach button. You should now see the debugger stop at the breakpoint you set earlier.

At this point you can set breakpoints in other parts of the program as normal. Note that if a process is not currently attached to the debugger, it will not stop at breakpoints, but all the processes are waiting in the routine.

The Enable_Coarray_Debugging subroutine declares a scalar logical coarray variable Global_Debug_Flag that is initialized to .FALSE. in image 1. All images then enter a wait loop checking to see if image 1's Global_Debug_Flag is .TRUE.. In order to preserve system responsiveness, the loop contains a call to SLEEPQQ that waits 500 milliseconds before checking again.

A local variable Local_Debug_Flag is checked inside the loop. This variable is also initialized to .FALSE. Once you have set all the breakpoints, use the debugger's Locals window to change the value of Local_Debug_Flag to .TRUE..

Double-click on the .FALSE. value and change it to .TRUE. Then continue execution in the debugger (click Continue or press F5). 

All images will now resume execution. The image you attached to will stop at any breakpoints you have set, and you can examine local variables. Note that coarray variables may not display properly in the debugger and changing their value in the Locals window typically has no effect.

You can go back to the Attach to Process... dialog and attach to other image processes. If breakpoints have been set, they will now stop in that process. If you have attached to multiple processes, you can switch among them using the Process control toward the upper left.

We find, however, that switching among processes in a coarray application may cause unexpected errors and recommend against it.

If you have questions specifically about this article you can ask below. For all other questions, or for faster response, please ask in our user forum.

Sensor to Cloud: Connecting Intel® NUC and Arduino 101* to Microsoft Azure* IoT Hub


Introduction

This article will show you how to use an Intel® Next Unit Computing (NUC) device to connect sensors on an Arduino 101* board to the Microsoft Azure* IoT Hub. You learn how to read real-time sensor data from the Arduino 101 board, view it locally on the Intel® NUC, and send it to the Azure IoT Hub, where the data can be stored, visualized, and processed in the cloud. To do all this, you use Node‑RED* on the NUC to create processing flows that perform the input, processing, and output functions that drive your application.

Setup and Prerequisites

  • Intel® NUC connected to the Internet
  • Arduino 101 board connected to the Intel® NUC through USB
  • Seeed Studio Grove* Base Shield attached to the Arduino 101 board and switched to 3V3 VCC
  • Grove sensors connected to the Base Shield: light on A1, rotary encoder on A2, button on D4, green LED on D5, buzzer on D6, and relay on D7
  • An active Azure cloud account
  • The packagegroup-cloud-azure package installed on the Intel® NUC

Read Sensors and Display Data on the Intel® IoT Gateway Developer Hub

Log in to the Intel® NUC’s Intel® IoT Gateway Developer Hub by entering the Intel® NUC’s IP address in your browser and using gwuser as the default user name and password. You’ll see basic information about the Intel® NUC, including its model number, version, Ethernet address, and network connectivity status.

Click the Sensors icon, and then click Manage Sensors to open the Node‑RED canvas, where you’ll see Sheet 1 with a default flow for an RH-USB sensor. You won’t use the RH-USB sensor for this project, so drag a box around the entire flow and delete it. You’re left with a blank canvas.

Along the left side of the Node-RED screen, you see a series of nodes. These are the building blocks for creating a Node‑RED application on the Intel® NUC. For this application, you’ll use the nodes shown in Table 1.

Table 1. Nodes used in the sample application

  • Read button presses
  • Measure light level
  • Measure rotary position
  • Relay open/closed
  • On/off LED indicator
  • Format chart display on the Intel® NUC device
  • Send data to the Intel® NUC’s Message Queuing Telemetry Transport (MQTT) chart listener
  • Send data to the Azure IoT Hub

Drag nodes onto the canvas and arrange them as shown in Figure 1. You will need multiple copies of some of the nodes. Use your mouse to connect wires between the nodes as shown.

Note: You’ll use the azureiothub node later; don’t include it now.

Figure 1. Arranging nodes on the Node‑RED canvas

When you first place nodes on the canvas, they are in a default state. You must configure them before they’ll work. To do so, double-click them, and then set parameters in their configuration panels.

Double-click each node on the canvas and set its parameters as shown in Table 2. In some cases, the Name field is left blank (it uses the default name of the node). Pin numbers correspond to the Base Shield jack to which the sensor or actuator is connected.

Table 2. Nodes and their parameters

  • Grove Button – Platform: Firmata, Pin: D4, Interval (ms): 1000
  • Grove Light – Platform: Firmata, Pin: A1, Unit: Raw Value, Interval (ms): 1000
  • Grove Rotary – Platform: Firmata, Pin: A2, Unit: Absolute Raw, Interval (ms): 1000
  • Grove LED – Platform: Firmata, Pin: D5, Mode: Output
  • Grove Relay (upper) – Platform: Firmata, Pin: D7
  • Grove Relay (lower) – Name: Grove Buzzer, Platform: Firmata, Pin: D6 (you use this node to control the buzzer)
  • chart tag connected to Grove Button – Title: Button, Type: Status Text
  • chart tag connected to Grove Light – Title: Light, Type: Gauge, Units: RAW
  • chart tag connected to Grove Rotary – Title: Rotary, Type: Gauge, Units: RAW
  • mqtt – Server: localhost:1883, Topic: /sensors, Name: Charts

Verify your settings and wiring connections, and then click Deploy to deploy your changes and make them active on the Intel® NUC. After deploying the flow, you should see a data display toward the top of the Intel® IoT Gateway Developer Hub, with live values for Rotary, Light, and Button (Figure 2). Turning the rotary knob and covering the light sensor should make the numbers change up and down; pressing the button should turn on the LED, sound the buzzer, and energize the relay.

Figure 2. The deployed Intel® NUC in the Intel® IoT Gateway Developer Hub
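
If the Mosquitto command-line clients happen to be installed on the Intel® NUC, you can optionally watch what the flow publishes to the local chart listener (the host, port, and topic come from the mqtt node configuration in Table 2; the payload format is whatever the chart nodes emit):

$ mosquitto_sub -h localhost -p 1883 -t /sensors -v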

Create the Microsoft Azure* IoT Hub

Before you can send sensor data to the Azure IoT Hub, you must create an Azure IoT Hub in your Azure cloud account. Log in to Azure, and then navigate to the Dashboard. To create an Azure IoT Hub, follow these steps:

  1. Click New > Internet of Things > IoT Hub.
  2. Set the parameters to match Table 3.

    Table 3. Parameters for your Microsoft Azure IoT Hub

    • Name: iothub-3982 (your Azure IoT Hub name must be unique within Azure; try different names until you find one that’s available)
    • Pricing and scale tier: F1 - Free (use the free tier for this application)
    • Resource group: MyIOT (create a new group)
    • Subscription: Pay-As-You-Go
    • Location: East US (pick a location in your geographic region)
  3. Select Pin to dashboard, and then click Create. Azure IoT Hub is deployed to your Azure account and appears on your Dashboard after a few minutes.
  4. After it has been deployed, find the iothubowner Connection string--primary key, which is a text string that you’ll need later.
  5. Click Dashboard > iothub-3982 > Settings > Shared access policies > iothubowner, and look for Connection string--primary key under Shared access keys. The string is complex, so copy it for use in the next step.

Create a Device Identity in Azure IoT Hub

Before a device can communicate with Azure IoT Hub, it must have a device identity in the Azure IoT Hub Device Identity Registry, which is a list of devices authorized to interact with your Azure IoT Hub instance.

You create and manage device identities through Representational State Transfer (REST) application programming interfaces (APIs) that Azure IoT Hub provides. There are different ways to use the REST APIs; in this guide, you’ll use an Azure open source command-line tool called iothub-explorer, which is available on GitHub. iothub-explorer is a Node.js* application, so you need Node.js 4.x or later installed on your computer to use it.

Use these shell commands to install iothub-explorer on your computer and create a device identity for the Intel® NUC. Make sure that you have the iothubowner Connection string--primary key string (found earlier) ready to paste:

Install the program:
npm install -g iothub-explorer

Verify it runs:
iothub-explorer help

Next, create and register a new device named intelnuc using the iothubowner Connection string--primary key you copied earlier. Run this shell command using your own iothubowner Connection string--primary key string inside the quotation marks:

iothub-explorer "HostName=iothub-3982.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=Be0w9Zew0909LLAKeiseXsdf/adfe239EODo9iwee9w=" create intelnuc --connection-string

When the device has been created, you’ll see the message “Created device intelnuc” and a list of parameters for the newly created device. Locate the connectionString parameter, which will have a long text string value next to it that starts with HostName=. You’ll copy and paste this <device> connectionString value in the next task.

Send Data to Azure IoT Hub

Next, add an Azure IoT Hub output node to the Intel® NUC’s Node‑RED flow to send your data to Azure IoT Hub. Complete the following steps:

  1. In the IoT Gateway Developer Hub, drag an azureiothub output node onto the canvas.
    When the node is dropped onto the canvas, its name changes to Azure IoT Hub.
  2. Connect a wire from the output of Grove Rotary to the input of Azure IoT Hub.
  3. Double-click the Azure IoT Hub node, and set the following parameters:
    Name: Azure IoT Hub
    Protocol: amqp
  4. For the Connection String property, paste in the <device> connectionString text string value you copied from the output of iothub-explorer. Make sure that Protocol remains set to amqp. After pasting the connectionString value, the node’s configuration panel should look like Figure 3.

    Figure 3. Parameters for the azureiothub node

  5. Click Ok, and then click Deploy to deploy your updated flow to the Intel® NUC.

At this point, the data values for your Grove Rotary sensor should be flowing to Azure IoT Hub once per second, which is the rate set in the Grove Rotary node. To view the transmission events in Azure IoT Hub, go to your Azure cloud account and navigate to Dashboard > iothub-3982. Look at the Usage tile. The number of transmission messages should be increasing at a rate corresponding to one message per second (Figure 4).

Figure 4. The number of transmission messages should increase by one per second.

If no messages are flowing, you’ll see 0 messages and 0 devices in the Usage tile. To view the actual event messages using iothub-explorer, run the following shell command using your own iothubowner Connection string--primary key:

iothub-explorer "HostName=iothub-3982.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=Be0w9Zew0909LLAKeiseXsdf/adfe239EODo9iwee9w=" monitor-events intelnuc

Note: When you’re done testing this application, be sure to stop your Node‑RED flow (for example, by turning off the Intel® NUC or removing the wire between the Grove Rotary sensor and Azure IoT Hub and redeploying the flow) to preserve the remaining messages in your Free Tier allotment for the Azure IoT Hub instance. Otherwise, the Node‑RED application will consume them as it continues to run.

Where to Go From Here

This application provides a foundation for connecting your Arduino 101 board and Intel® NUC to Azure IoT Hub. From here, you would typically wire up other sensors and send their data to Azure IoT Hub, then build more complex applications that listen to Azure IoT Hub messages and store, process, and visualize the sensor data.
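
If you later move beyond Node‑RED, the same <device> connectionString can be reused from your own programs through the Azure IoT device SDKs. The following C sketch is illustrative only: it assumes the Azure IoT C SDK is installed, and the header names, link settings, and exact API surface vary between SDK versions, so treat it as a starting point rather than a finished implementation.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include "azure_c_shared_utility/platform.h"
#include "iothub_client_ll.h"
#include "iothub_message.h"
#include "iothubtransportamqp.h"

int main(void)
{
    /* Paste the <device> connectionString reported by iothub-explorer here. */
    static const char *connection_string =
        "HostName=<your-hub>.azure-devices.net;DeviceId=intelnuc;SharedAccessKey=<key>";

    platform_init();
    IOTHUB_CLIENT_LL_HANDLE client =
        IoTHubClient_LL_CreateFromConnectionString(connection_string, AMQP_Protocol);

    /* Send one JSON telemetry message (the payload format here is illustrative). */
    const char *payload = "{\"sensor\":\"rotary\",\"value\":512}";
    IOTHUB_MESSAGE_HANDLE message =
        IoTHubMessage_CreateFromByteArray((const unsigned char *)payload, strlen(payload));
    IoTHubClient_LL_SendEventAsync(client, message, NULL, NULL);

    /* The low-level client is single threaded; pump DoWork until the message goes out. */
    for (int i = 0; i < 100; ++i) {
        IoTHubClient_LL_DoWork(client);
        usleep(100 * 1000);
    }

    IoTHubMessage_Destroy(message);
    IoTHubClient_LL_Destroy(client);
    platform_deinit();
    return 0;
}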

Additional Reading:


OpenCL™ Drivers and Runtimes for Intel® Architecture


Some of the OpenCL* Drivers and Runtimes are provided as part of:

 

Packages Available

Installation of a relevant runtime or driver enables OpenCL applications to run on a target hardware set.

By downloading a package from this page, you accept the End User License Agreement.

The packages fall into the following categories:
  • CPU+GPU+SDK (also available in Intel® Media Server Studio)
  • GPU driver packages
  • CPU-only, runtime-only packages
  • Deprecated packages
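
Whichever package you choose, a quick way to confirm that the installed runtime is visible to applications is to enumerate the OpenCL platforms and devices. The following minimal C example uses only standard OpenCL host APIs; the file name and build line (for example, gcc list_cl.c -lOpenCL on Linux) are placeholders.

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;

    /* Ask the installed runtime(s) for up to 8 platforms. */
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        printf("No OpenCL platforms found - is a runtime installed?\n");
        return 1;
    }
    if (num_platforms > 8) num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform %u: %s\n", p, name);

        /* List every device (CPU and/or GPU) exposed by this platform. */
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);
        if (num_devices > 8) num_devices = 8;
        for (cl_uint d = 0; d < num_devices; ++d) {
            char dev_name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dev_name), dev_name, NULL);
            printf("  Device %u: %s\n", d, dev_name);
        }
    }
    return 0;
}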

 


Intel® SDK for OpenCL™ Applications 2016 R2 for Linux* (64 bit)

This is a standalone release for customers who do not need integration with the Intel® Media Server Studio (MSS).  It provides all components needed to run and compile OpenCL applications. 

Visit https://software.intel.com/en-us/intel-opencl to download the version for your platform. For details check out the Release Notes.

Intel® SDK for OpenCL™ Applications 2016 R2 for Windows* (64 bit)

This is a standalone release for customers who do not need integration with the Intel® Media Server Studio (MSS).  The Windows* graphics driver contains the driver and runtime library components necessary to run OpenCL applications. This package provides components for OpenCL development. 

Visit https://software.intel.com/en-us/intel-opencl to download the version for your platform. For details check out Release Notes.


OpenCL™ 2.0 Driver for Intel® HD, Iris™, and Iris™ Pro Graphics for Linux* (64-bit)

The intel-opencl-2.0-2.0 driver for Linux is an intermediate release preceding  Intel® SDK for OpenCL™ Applications 2016 R2 for Linux*.  It provides access to the general-purpose, parallel compute capabilities of Intel® graphics for OpenCL applications as a standalone package. 

Intel has validated the intel-opencl-2.0-2.0 driver on CentOS 7.2 for the following 64-bit kernels.

  • Linux 4.4 kernel patched for OpenCL 2.0

Supported OpenCL devices:

  • Intel Graphics (GPU)

For detailed information please see the Release Notes.


OpenCL™ Driver for Intel® Iris™ and Intel® HD Graphics for Windows* OS (64-bit and 32-bit)

The Intel® Graphics driver includes components needed to run OpenCL* and Intel® Media SDK applications on processors with Intel® Iris™ Graphics or Intel® HD Graphics on Windows* OS.

You can use the Intel Driver Update Utility to automatically detect and update your drivers and software.  Using the latest available graphics driver for your processor is usually recommended.


See also Identifying your Intel® Graphics Controller.

Supported OpenCL devices:

  • Intel Graphics (GPU)
  • CPU

For the full list of Intel® Architecture processors with OpenCL support on Intel Graphics under Windows*, refer to the Release Notes.

 

 


OpenCL™ Runtime for Intel® Core™ and Intel® Xeon® Processors

This runtime software package adds OpenCL CPU device support on systems with Intel Core and Intel Xeon processors.

Supported OpenCL devices:

  • CPU

Latest release (16.1.1)

Previous Runtimes (16.1)

Previous Runtimes (15.1):

For the full list of supported Intel Architecture processors, refer to the OpenCL™ Runtime Release Notes.

 


 Deprecated Releases

Note: These releases are no longer maintained or supported by Intel

OpenCL™ Runtime 14.2 for Intel® CPU and Intel® Xeon Phi™ Coprocessors

This runtime software package adds OpenCL support to Intel Core and Xeon processors and Intel Xeon Phi coprocessors.

Supported OpenCL devices:

  • Intel Xeon Phi Coprocessor
  • CPU

Available Runtimes

For the full list of supported Intel Architecture processors, refer to the OpenCL™ Runtime Release Notes.


Deploying IoT Devices with Wind River* Helix* Device Cloud


This article is part of a five-part series. Be sure to see the others:

Deploying IoT Devices with Wind River* Helix* Device Cloud

Wind River* Helix* Device Cloud enables remote management and monitoring of your Internet of Things (IoT) edge devices and gateways, scaling to thousands of devices. This article explores the benefits of Device Cloud and describes some of the capabilities it provides for deployed devices.

Introducing Wind River* Helix* Device Cloud

Device Cloud is a cloud-based service designed to manage and monitor IoT devices at large scales. Building an IoT device can be a challenge in itself, but attempting to manage thousands or hundreds of thousands of these deployed devices can be an uphill battle.

Device Cloud takes care of many of the common management and monitoring tasks associated with IoT deployments, including securely capturing and monitoring data from disparate devices, supporting mass configuration, and even delivering firmware updates. All these capabilities come through a single, web-based console.

As Figure 1 shows, Device Cloud monitors and manages devices over the Internet and exposes a web-based console and Representational State Transfer (REST)-ful application programming interfaces (APIs) to enterprise applications to provide full access to deployed devices around the world.

Figure 1. Internet of Things management with Wind River* Helix* Device Cloud [Source: Created and drawn by author; all other images from the Wind River* Dev Cloud website.]

Let’s jump into Device Cloud so that you can see how to view devices, monitor them, and even create rules to create alerts automatically when other rules are satisfied.

Scaling your IoT deployment

IoT devices generate data and require management, configuration, and monitoring, all at new scales. Because of these scales, new requirements such as autonomous operation (removing the human in the loop) arise. Using Device Cloud, you can easily reduce the complexities of dealing with large numbers of devices, all through a cloud-based platform.

Device Cloud enables the management of devices (end points through which data can be collected and actuators manipulated). Each device includes an embedded software agent that cooperates with Device Cloud for a variety of actions. Figure 2 shows those devices that are currently being managed and can be selected for a standard or custom action.

Figure 2. Viewing devices in the Wind River* Helix* Device Cloud console

In Figure 2, you can see a scrollable list of devices that have been registered in Device Cloud, with some identifying information: the unique ID for the device, its media access control (MAC) address, and the operating system running on the device. Near the upper-right corner is a search bar that dynamically filters the list of devices based on search criteria that you provide. As shown in Figure 3, I’ve indicated a portion of the MAC address to view only those devices that match this criterion. This is an important feature: If you want to update a given set of devices (using the UPDATES list), the search feature allows you to restrict the set of devices based on your needs.

Figure 3. Viewing a subset of devices in the Wind River* Helix* Device Cloud console

As your deployment of IoT devices increases, managing and monitoring these devices from a single management console simplifies what could be a growing burden.

Secure management and monitoring

Now, let’s dig further into Device Cloud to view a sensor, and then create a custom rule to generate an alert.

On the DEVICES tab, I’ve refined the devices list through the search criteria using the MAC address (see Figure 4). The first device shown is Graham’s Galileo, which is an Arduino*-certified development board based on the Intel x86 architecture.

Figure 4. Viewing another subset of devices

I’ve drawn a blue bracket around this name, which is also a link. If I click this link, I’ll be taken to a new page that allows me to focus actions on this device and learn more about its capabilities (see Figure 5).

Figure 5. Viewing details of a device

In the CUSTOM ACTIONS list, you can see three tabs. I’m currently on the Details tab. This tab contains useful information such as the unique device ID, actions available on this device, and the available sensors. If I click the Telemetry Data tab, I’ll be taken to a new page that allows me to view these sensors and their data timeline (see Figure 6). Note that the “light” sensor was the default, so Device Cloud loads this data immediately.

Viewing sensor data from a device

Figure 6. Viewing sensor data from a device

This sensor emits telemetry data on a scale from 0 to 1000; as you can see, the light sensor data varies over the course of a day. This graph is interactive: By moving the mouse over the graph, you can generate more detailed data for each point displayed (the exact time of the collected data and exact value). You can also modify the graph to restrict the time of the data to view more or less of the available sensor data. In Figure 7, I’ve modified the timeframe to the last 30 minutes of data. Note in the bottom right the newly selected data range.

Viewing the most recent sensor data

Figure 7. Viewing the most recent sensor data

Interacting with a device

Device Cloud allows you to interact with a device at many levels. These actions cover a variety of standard and custom-defined activities that are specific to the device in question.

Let’s return to the example device and look at some of these. Under STANDARD ACTIONS, you can restart, shut down, or reset the device, and send and receive files to or from the device. The CUSTOM ACTIONS list allows you to perform more specific actions on the device in question (see Figure 8). From Figure 8, you can see actions to enable or disable the LED, dump the log files from the device, transfer files, disable the device, and even open a remote shell to the device.

Performing custom actions on the device

Figure 8. Performing custom actions on the device

On the UPDATES tab, you can upload update files or request an update of the selected devices.

Monitoring device properties with rules

Now, let’s look at how you can monitor a device in real time. For this example, I do this by creating a rule. When I click RULES at the top of the page, Device Cloud takes me to a new page that shows the currently created rules for the collection of devices (see Figure 9).

Viewing the current rules

Figure 9. Viewing the current rules

I click CREATE NEW RULE and am taken to a page that allows me to define my rule. In this example, the rule I create generates an alert if the temperature exceeds a value. As shown in Figure 10, I’ve named my rule Temperature Rule and selected the device of interest.

Creating a new rule

Figure 10. Creating a new rule

Scrolling down the page below the devices list, there’s a section where I can define a condition, including the sensor name (temp), an operator (is greater than), and a value (35). For the rule response, I’ve chosen CREATE AN ALERT and named the alert Temperature Alert so that I’m notified when the condition occurs. Finally, I click SAVE AND ENABLE at the bottom of the page to save my rule and enable it (Figure 11).

Completing the creation of a new rule

Figure 11. Completing the creation of a new rule

Sometime later, the rule condition is satisfied on this device, and an alert is generated for my Temperature Alert. The page also shows my created rule and its status (see Figure 12).

A triggered rule condition resulting in an alert

Figure 12. A triggered rule condition resulting in an alert

Data security

One of the most important attributes of any IoT deployment is security. Wind River Helix incorporates end-to-end security that covers all levels of the IoT stack, including user, network, application, and of course data security.

Helix Cloud uses Secure Sockets Layer (SSL) encryption to ensure that data traveling over the Internet is not visible to third parties. Certificate-based authentication is also used to prevent spoofing or impersonation of authorized agents.

Key benefits of Device Cloud

Device Cloud makes it easy to build device management capabilities into your IoT ecosystem. Using Device Cloud, you can

  • Collect and manage data from a scalable number of disparate devices
  • Protect data in flight against third-party access
  • Manage a scalable number of devices from a single web console
  • Create rules to check device data and generate alerts automatically
  • Access data and control it from your enterprise with RESTful APIs

Summary

Device Cloud is a ready-to-use cloud platform for the management and monitoring of IoT deployments. Through an onboard software agent (Wind River*–provided software that interfaces with Device Cloud) embedded in your IoT products, Device Cloud can securely connect to those products to gather data and access management interfaces for complex tasks like firmware upgrades.

Within Device Cloud, you can manage your entire IoT device population (one device at a time or many at once) from a single web console. You can view data from your devices and scale the generated graphs over any timeframe. You can also create rules that automatically check your data against simple or complex conditions, and then generate alerts to notify you when those conditions occur.

Most importantly, Device Cloud lets you focus on your IoT application while offloading data collection and overall management. As a cloud-based service, Device Cloud can scale easily from one to many devices.

IoT Gateways and the Wind River* Helix* Cloud


This article is the last of a five-part series. Be sure to see the others:

IoT Gateways and the Wind River* Helix* Cloud

Wind River* Helix* Cloud is an interconnected set of cloud-enabled services that work seamlessly together. Internet of Things (IoT) gateways use these capabilities for development, testing, and deployment. This article explores these IoT gateways and shows how they benefit from Helix Cloud. An IoT gateway is a common device in IoT infrastructures, and includes a variety of interfaces for connectivity to low-level devices and Internet-based protocols for communication with the cloud.

Developing, testing, and deploying in the cloud

Helix Cloud is a cloud-based platform that supports the firmware development life cycle (see Figure 1). It supports firmware development with Wind River* Helix* App Cloud, using a web-based integrated development environment (IDE); debugging and testing with Wind River* Helix* Lab Cloud (from a web console); and deployment with Wind River* Helix* Device Cloud (through a single web console or from your own enterprise IT environment through Representational State Transfer [RESTful] application programming interfaces).

Connectivity between services in Wind River* Helix* Cloud

Figure 1. Connectivity between services in Wind River* Helix* Cloud

These cloud-based services cover a variety of processor architectures, from various flavors of Intel® x86 processors to PowerPC and ARM*. This broad range of processor architectures supports the development of simple and complex edge devices as well as IoT gateways.

Another useful aspect of Helix Cloud is the connectivity between services. From App Cloud, you can develop firmware and select a session on which to execute your code (from Lab Cloud). You can access Lab Cloud directly to create and connect to virtual devices (called sessions) for firmware debugging and testing. From an integration standpoint, from Lab Cloud you can select a virtual session, and then bring up App Cloud on that session to execute firmware.

Engineering a change in the IoT

Helix Cloud represents a significant change in the development of IoT products. Rather than focus on one aspect of the production of IoT devices, Helix Cloud covers the entire life cycle of product development, from building firmware to validating devices to deploying and managing devices in the field.

Wind River also provides components that you can use for integration into the overall Helix Cloud. One example is the embedded software agent that’s deployed within IoT gateways. This agent implements standard protocols for monitoring and management of the IoT gateway in the field in a secure and standards-based fashion.

Increasing speed, lowering cost, and reducing risk

Helix Cloud provides many benefits for the development of IoT components such as edge devices and gateways. You can access App Cloud, as a web-based IDE, from any modern web browser, which means that you can develop anywhere you have an Internet connection. Because the IDE is stateful, when you exit the browser at work, and then bring it back up when you arrive at home, the IDE is just where you left it.

When using Lab Cloud for testing, it’s easy to create a virtual device, and then debug your embedded application. Another benefit of Lab Cloud is its ability to take snapshots of running sessions. The snapshot is the state of the session at a given time. From a test point of view, snapshots make it easy to zero in on a bug. Then, by sharing the snapshot, you can watch the issue in action, more quickly identify the root cause, and resolve it, reducing cost and shortening time to market.

For a gateway application, you can easily take your portable source and compile and test on a variety of gateway devices and configurations within Lab Cloud. This parallelized testing makes it faster to find issues and pinpoint them on specific hardware or device configurations. When your testing is complete, you can integrate the image that App Cloud creates into your physical gateway.

Device Cloud simplifies your ability to scale your gateway application with Wind River’s cloud-based infrastructure. When using Wind River’s embedded agent, your IoT gateway includes everything it needs for secure data transfer and manageability, such as data capture, configuration, and rules-based data analysis (Figure 2). Device Cloud allows you to focus on your IoT application, leaving standard capabilities like firmware updates and IoT communication protocols to the agent, which integrates cleanly into Device Cloud.

The Wind River* Helix* Device Cloud agent

Figure 2. The Wind River* Helix* Device Cloud agent

IoT and the gateway

As the name implies, the IoT gateway is a conduit within an IoT ecosystem for data collection and transmission as well as manageability. Because the gateway is a standard device in an IoT environment, many off-the-shelf parts and components are available to simplify your development.

The physical hardware for an IoT gateway is a specialized set of processor cores and interfaces that provide access to sensors and actuators. Intel provides off-the-shelf hardware that satisfies these needs across target IoT markets, including single- and multicore processors, to meet your specific gateway application requirements. These devices also include several communications and connectivity options, such as Wi-Fi*, Bluetooth*, cellular interfaces, serial ports, and USB. These options provide everything you need to access sensors and communicate your collected data beyond the gateway and into the cloud.

The gateway and the cloud

An IoT gateway is a bridge that accesses a variety of buses and interfaces to the edge devices that lie beyond, for the purposes of data collection and manipulation (if actuators are present). Data communication and controls are exposed to the cloud through higher-bandwidth interfaces such as Ethernet or Wi-Fi. The gateway is an integral part of any IoT ecosystem, forming a hierarchy. Rather than the cloud knowing about every low-level device that exists, the gateway mediates between the cloud and edge devices. In addition to sensor and actuator access, the IoT gateway includes functions necessary for management and security.

As the world becomes more connected through the IoT, the need to protect these devices against unauthorized access and secure their data increases. The gateway is an ideal place for this security because edge devices tend to be less powerful, and in many cases lack the ability to connect directly to the cloud. The gateway fills this niche, exposing the heavier-weight protocols such as Message Queuing Telemetry Transport (MQTT) to the cloud with enough processing capability to gather and process sensor data and control devices that can be manipulated.
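
The article doesn’t show the agent’s internals, but the gateway-to-cloud publishing path is easy to picture with a short sketch. The Java example below uses the Eclipse Paho MQTT client to publish an aggregated sensor reading; the broker URL, client ID, topic name, and JSON payload are illustrative assumptions rather than the Device Cloud agent’s actual configuration.

  import org.eclipse.paho.client.mqttv3.MqttClient;
  import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
  import org.eclipse.paho.client.mqttv3.MqttMessage;

  public class GatewayPublishSketch {
      public static void main(String[] args) throws Exception {
          // Illustrative broker and client ID; a real gateway would use the
          // endpoint, credentials, and TLS settings of its cloud service.
          MqttClient client = new MqttClient("ssl://broker.example.com:8883", "gateway-001");

          MqttConnectOptions options = new MqttConnectOptions();
          options.setCleanSession(true);
          client.connect(options);

          // Publish one reading gathered from an edge sensor.
          String payload = "{\"sensor\":\"temp\",\"value\":36.2}";
          MqttMessage message = new MqttMessage(payload.getBytes());
          message.setQos(1); // at-least-once delivery
          client.publish("gateway/telemetry/temp", message);

          client.disconnect();
      }
  }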

Summary

Helix Cloud is an ideal cloud-based platform for IoT gateway development, testing, management, and monitoring. In addition to edge device firmware development, you can develop for a range of IoT gateway processor platforms, including multicore designs, using App Cloud and Lab Cloud. Device Cloud incorporates the gateway as a standard part of the IoT ecosystem, with protocol support focused on gateway functionality such as MQTT. Helix Cloud also extends functionality to the gateway through an embedded agent that integrates the gateway for manageability and security. From IoT product development to a production-ready management and monitoring environment, Helix Cloud covers the entire product life cycle.

Understanding Wind River* Helix* Cloud


This article is part of a five-part series. Be sure to see the others:

Understanding Wind River* Helix* Cloud

Building applications for the Internet of Things is a complicated endeavor. Your development can span a variety of disciplines, including embedded firmware development, sensors and actuators, networking protocols, server-side application development, security, and manageability. That's a broad spectrum of engineering specializations, which could restrict IoT products to larger companies with dedicated teams.

Thankfully, Wind River* has constructed a portfolio of IoT services that allow IoT developers to focus on their applications instead of the plethora of required enabling technologies. This IoT portfolio is made up of three key services and a number of technologies that enable fast time to market.

A Tour of the Helix IoT Ecosystem

Wind River's Helix Cloud is composed of a cooperating set of services that cover the lifecycle of IoT product development, test and deployment (see Figure 1). The App Cloud provides a cloud-based IDE for the development of firmware for IoT devices, including collaboration between two or more developers. This firmware can be tested on physical or virtual devices using the Lab Cloud, scaling to your testing needs. Finally, when your IoT device is ready to be deployed, the Device Cloud can be used for secure management and data transfer as well as overall device monitoring as your network of devices scales.

 

Figure 1. The Helix IoT ecosystem. (IMAGE SOURCE: Author created.)

Let's explore each of these services, and how they can be used to accelerate your IoT product development.

Helix* App Cloud

The App Cloud is a fully-featured development environment, built in the context of a web browser. This cloud-based development environment makes IoT firmware development possible anywhere there's a connection to the Internet. Figure 2 shows it running in the Google Chrome browser.

Figure 2. Helix* App Cloud browser-based IDE. (IMAGE SOURCE: Author Captured Helix App Cloud Web Site Screenshot)

As shown in Figure 2, the App Cloud is exposed as a fully-featured IDE in a web browser, with four panes. On the left is a file pane, showing the files that make up the workspace. At the top middle is the editor, showing a test application (C source file); tabs like the active one make multi-file editing simple and efficient. The right pane shows the debugger (currently executing the test application) with a list of control icons (for pausing execution and single-step debugging), the list of threads that are currently executing, the call stack, and an empty list of breakpoints. Finally, the bottom, middle pane contains a number of tabs for output, including the build tab and the output tab for the currently running application. Where this application executes is a topic for the next section.

The power of the App Cloud is not just that you can access an IDE through a web browser anywhere there's an Internet connection. The power is in the IDE's connectivity to other services, such as the Lab Cloud.

Helix* Lab Cloud

While the App Cloud provides the means to build your IoT application, you'll need hardware (virtual or otherwise) to execute it. This is where the Lab Cloud comes in. The Helix* Lab Cloud is a cloud-based environment for debugging and testing your IoT applications. Using the Lab Cloud, you can attach to physical hardware resources (like the Intel® Edison board) or create and attach to virtual hardware resources. One virtual option is the QEMU x86 Intel® Quark™ simulator, which behaves identically to a physical processor. Your IoT application runs with complete transparency on the virtual platform as if it were a physical one.

As a cloud-based service, the Lab Cloud also enables collaboration with other users, sharing physical and lower-cost virtual resources. Once your IoT application has been validated by one or more devices in the Lab Cloud, you can deploy your application to your target device of choice.

From Figure 2, you can see a device from the Lab Cloud used for debugging the test application. In the upper right corner, you'll see green text with the name “TestDevice.” This is a virtual Intel® Quark x86 device that is online, as indicated by the green bubble. The debugger pane on the right is attached to this virtual device, as is the output pane connection (middle bottom) receiving console output from the application running on the virtual device.

Figure 3 shows the web page for the Lab Cloud, and a short list of the platforms that are supported (hardware and RTOS).

Figure 3. Helix Lab Cloud browser interface. (IMAGE SOURCE: Author Captured Helix Lab Cloud Website Screenshot)

So far, we've seen how to develop and build our IoT application, as well as test it on physical or virtual devices. Now let's explore the deployment of our IoT devices, and how we can securely manage them in the field.

Helix* Device Cloud

The scale of IoT deployments can stress the traditional methods for device management. Managing a handful of devices is a trivial task, but once the set expands to hundreds, or hundreds of thousands of devices, more scalable methods are necessary.

The Device Cloud is a cloud-based platform that can scale to your particular IoT deployment. It provides you with key capabilities, such as the ability to provision new devices, monitor their operation, manage them for configuration, securely capture data into the cloud, and decommission devices at the end of their lifecycle.

The Device Cloud enables secure migration of data from your IoT devices into the cloud using end-to-end encryption for subsequent storage and analytics. It allows you to develop rules for incoming telemetry data that can automatically trigger actions based on conditions such as device faults or device properties. The Device Cloud also provides manageability and a number of other features, including firmware updates, integration with edge-device operating systems (such as Wind River* Linux*), health monitoring of devices through RESTful APIs, and the ability to configure devices in the field through those same APIs. The Device Cloud creates a seamless integration from the edge device into the cloud, allowing you to focus on developing your application and reducing your overall time-to-market.

Key benefits of the Helix* Cloud

  • Accelerate your development with the App Cloud through anywhere and anytime access
  • Provide instant test resource access to your team with the Lab Cloud and build new virtual device configurations on the fly
  • Scale your IoT device deployment with the Device Cloud to provision, configure, monitor, and service
  • Leverage Wind River* embedded operating systems (such as Rocket* and Pulsar*), which integrate fully with Helix* Cloud services.
  • Leverage Intel® hardware designs for OS integration and seamless connectivity to the Helix Cloud services.

Summary

The Helix* Cloud provides you with an integrated set of services that cover the product lifecycle, from development and test, to deployment and analytics/monitoring. The Helix Cloud solves the common problems you'll encounter with device management, such as secure data delivery and robust management, allowing you to focus on your IoT application.


Jumpstart your IoT Innovation - Intel System Studio 2016 for Microcontrollers Update 1 is Now Available


What’s New: Support for Intel® Quark™ SE Microcontroller C1000 and Intel® Curie Module

We just released Intel® System Studio 2016 for Microcontrollers Update 1. What’s cool about it is additional support for the upcoming Intel® Quark™ SE Microcontroller C1000 and the already available Intel® Curie™ module found in the Arduino/Genuino* 101 board. Developers can use the updated Intel System Studio for Microcontrollers tool suite to create amazing “Things” on these Intel Quark microcontroller platforms.

Listed below are just a few of this release’s top new capabilities. 

  • Support for Intel® Quark™ SE microcontroller C1000, and the Arduino/Genuino* 101 board with Intel Curie module

  • Support for Zephyr project* RTOS and code samples to jumpstart your development

  • Updated Intel® Quark™ Microcontroller software interface and code samples to make your development easier

  • New Intel® C Compiler for Microcontrollers (LLVM-based), optimized for resource-constrained environments and performance

  • More optimized Intel® Integrated Performance Primitives for Microcontrollers library functions for digital signal processing (DSP)

  • Simplified IDE workflow to make it even easier to start your IoT development

Download Now:  Linux*   Windows* 

Learn more at Intel® System Studio for Microcontrollers site. 

Code Sample: Close Call Reporter in Java*, (How-to Intel® IoT Technology Series)


Introduction

This close call fleet driving reporter application is part of a series of how-to Intel® IoT code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® XDK IoT Edition, an IDE for creating applications that interact with sensors and actuators, enabling a quick start for developing software for the Intel® Edison or Intel® Galileo board.
  •  Store the close-call data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS), different cloud services for connecting IoT solutions including data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.

What it is

Using an Intel® Edison board, this project lets you create a close-call fleet driving reporter that:

  • monitors the Grove* IR Distance Interrupter.
  • monitors the Grove GPS.
  • keeps track of close calls and logs them using cloud-based data storage.

How it works

This close-call reporter system monitors for objects in the direction the Grove* IR Distance Interrupter is pointed.

It also keeps track of the GPS position of the Intel® Edison board, updating the position frequently to ensure accurate data.

If a close call is detected (that is, the Grove IR Distance Interrupter is tripped), the Intel® Edison board, if configured, notifies the Intel® IoT Examples Data store running in your own Microsoft Azure* account.
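
The complete Java source lives in the repository referenced below. As a minimal sketch of the detection loop only, and assuming the UPM Java binding upm_rfr359f exposes the same objectDetected() call as the underlying C++ library, the core logic looks roughly like this (the pin number matches the wiring described later, and the class name is illustrative):

  import upm_rfr359f.RFR359F;

  public class CloseCallLoopSketch {
      static {
          // UPM's usual JNI library naming convention; adjust if your build differs.
          System.loadLibrary("javaupm_rfr359f");
      }

      public static void main(String[] args) throws InterruptedException {
          // Grove IR Distance Interrupter on digital pin D2, matching the wiring
          // described later; the pin number is this sketch's assumption.
          RFR359F interrupter = new RFR359F(2);

          while (true) {
              // objectDetected() reports whether something is breaking the IR beam.
              if (interrupter.objectDetected()) {
                  // The full sample also reads the Grove GPS position here and
                  // posts the close-call event to the configured data store.
                  System.out.println("Close call detected");
              }
              Thread.sleep(100); // poll roughly ten times per second
          }
      }
  }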

Hardware requirements

Grove* Starter Kit Plus containing:

Grove Transportation & Safety Kit containing:

  1. Intel® Edison platform with an Arduino breakout board
  2. Grove IR Distance Interrupter (http://iotdk.intel.com/docs/master/upm/node/classes/rfr359f.html)
  3. Grove GPS (http://iotdk.intel.com/docs/master/upm/node/classes/ublox6.html)

Software requirements

  1. Intel® XDK IoT Edition
  2. Microsoft Azure*, IBM Bluemix*, or AWS account (optional)

How to set up

To begin, clone the How-To Intel® IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

Want to download a .zip file? In your web browser, go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel® System Studio IoT Edition

** The following screenshots are from the Alarm clock sample; however, the technique for adding the program is the same, just with different source files and jars.

Open Intel® System Studio IoT Edition; it will start by asking for a workspace directory. Choose one, and then click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project:

Give the project the name "CloseCallReporter" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also try to search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file to the project. Drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overridden.

The project uses the following external jars: gson-2.6.1. These can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed jars into this folder. In Intel® System Studio IoT Edition, select all jar files in the "jars" folder, and then right-click -> Build path -> Add to build path.

Now you need to add the UPM jar files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path -> Libraries tab, click "Add External JARs..."

For this sample, you will need the following jars:

  1. upm_ublox6.jar
  2. upm_rfr359f.jar

The jars can be found at the IoT Devkit installation root path\iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java

Connecting the Grove* sensors

You need to have a Grove* Shield connected to an Arduino*-compatible breakout board to plug all the Grove devices into the Grove Shield. Make sure you have the tiny VCC switch on the Grove Shield set to 5V.

  1. Plug one end of a Grove cable into the Grove IR Distance Interrupter, and connect the other end to the D2 port on the Grove Shield.
  2. Plug one end of a Grove cable into the Grove GPS, and connect the other end to the UART port on the Grove Shield.

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS*, along with Node.js*, and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional data store, change the SERVER and AUTH_TOKEN keys in the config.properties file to the server URL and authentication token that correspond to your own data store server setup. For example:

  SERVER=http://mySite.azurewebsites.net/logger/close-call-reporter
  AUTH_TOKEN=myPassword
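
At run time, the sample simply reads these keys from config.properties. A minimal sketch of that step using java.util.Properties (the file name and the two keys come from the instructions above, while the class and variable names are illustrative) looks like this:

  import java.io.FileInputStream;
  import java.io.IOException;
  import java.util.Properties;

  public class DataStoreConfigSketch {
      public static void main(String[] args) throws IOException {
          Properties config = new Properties();
          // config.properties ships with the sample; SERVER and AUTH_TOKEN are
          // the keys described above.
          try (FileInputStream in = new FileInputStream("config.properties")) {
              config.load(in);
          }

          String server = config.getProperty("SERVER");
          String authToken = config.getProperty("AUTH_TOKEN");

          // The sample uses these values when posting close-call events to the
          // optional data store server.
          System.out.println("Data store server: " + server);
          System.out.println("Auth token present: " + (authToken != null));
      }
  }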

Preparing the Intel® Edison board before running the project

In order for the sample to run, you will need to copy some files to the Intel® Edison board. This can be done using SCP through SSH:

The files need to be copied from the sample repository:
JAR files: the external libraries used by the project need to be copied to "/usr/lib/java" on the board.

Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel® Edison board.

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.
