Channel: Intel Developer Zone Articles

Code Sample: BLE Scan Bracelet in Java* (How-to Intel® IoT Technology Series)


Introduction

This Bluetooth* Low Energy (BLE) scan bracelet application is part of a series of how-to Intel® Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio IoT Edition. Intel® System Studio IoT Edition lets you create and test applications on Intel®-based IoT platforms.
  • Store detected BLE devices using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS). These cloud services for connecting IoT solutions include data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.

What it is

Using an Intel® Edison board, this project lets you create a BLE scan bracelet that:

  • searches for BLE devices that come within its scanning range.
  • displays information about detected devices using the OLED display.
  • keeps track of detected devices, using cloud-based data storage.

How it works

This BLE scanner bracelet uses a Xadow* expansion board for the Intel® Edison platform and the OLED display included in the Xadow kit.

With these components, we'll make a simple BLE scanner that displays information on the OLED display when BLE-equipped devices enter or exit its scanning range.

Optionally, all data can be stored using the Intel® IoT Examples Data store running in your own Microsoft Azure* account.

Hardware requirements

Xadow* Starter Kit containing:

  1. Intel® Edison platform with a Xadow* expansion board
  2. Xadow* - OLED display (http://iotdk.intel.com/docs/master/upm/node/classes/ssd1308.html)

Software requirements

  1. Intel® System Studio IoT Edition
  2. Microsoft Azure*, IBM Bluemix*, or AWS account (optional)

How to set up

To begin, clone the How-To Intel IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

To download a .zip file, in your web browser, go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel System Studio IoT Edition

** The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and jars.

Open Intel® System Studio IoT Edition. It starts by asking for a workspace directory; choose one and then click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project:

Give the project the name "BleScanBracelet" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also try to search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file to the project. Drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overridden.

The project uses the following external jars: gson-2.6.1 and joda-time-2.9.2. These can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed jars into this folder. In Intel® System Studio IoT Edition, select all jar files in the "jars" folder, then right-click -> Build path -> Add to build path.

Now you need to add the UPM jar files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click "Add External JARs...".

For this sample, you need the following jars:

  1. upm_i2clcd.jar
  2. tinyb.jar

The jars can be found under the IoT DevKit installation root at \iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.

Set up the Intel® Edison board for BLE development

To set up the Intel® Edison board for BLE, run the following command:

rfkill unblock bluetooth

Connecting the Xadow* components

You need to have a Xadow* expansion board connected to the Intel® Edison board to plug in all the Xadow devices.

Plug one end of a Xadow connector into the Xadow OLED, and connect the other end to one of the side connectors on the Xadow* expansion board.

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS*, along with Node.js*, and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional data store, change the SERVER and AUTH_TOKEN keys in the config.properties file to the server URL and authentication token that correspond to your own data store server setup. For example:

  SERVER=http://mySite.azurewebsites.net/logger/fire-alarm
  AUTH_TOKEN=myPassword
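As an illustration of how a Java sample can read these keys at startup, java.util.Properties parses KEY=VALUE lines directly. The class name and structure below are assumptions for the sketch, not the sample's actual code:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Illustrative only: reads the SERVER and AUTH_TOKEN keys in the same
// format as config.properties using the JDK's Properties class.
public class Config {
    public static Properties load(Reader source) {
        Properties props = new Properties();
        try {
            props.load(source);  // parses KEY=VALUE (or KEY: VALUE) lines
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return props;
    }

    public static void main(String[] args) {
        String text = "SERVER=http://mySite.azurewebsites.net/logger/fire-alarm\n"
                    + "AUTH_TOKEN=myPassword\n";
        Properties props = load(new StringReader(text));
        System.out.println(props.getProperty("SERVER"));
        System.out.println(props.getProperty("AUTH_TOKEN"));
    }
}
```

In the real project you would load from the config.properties file on disk rather than from a string.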

Preparing the Intel® Edison board before running the project

In order for the sample to run, you need to copy some files to the Intel® Edison board. This can be done using SCP through SSH. The following files need to be copied from the sample repository:

Jar files: the external libraries used by the project need to be copied to "/usr/lib/java" on the board.

Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel® Edison board.

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.


Code Sample: Alarm Clock in Java* (How-to Intel® IoT Technology Series)


Introduction

This smart alarm clock application is part of a series of how-to Intel® Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio IoT Edition. Intel® System Studio lets you create and test applications on Intel-based IoT platforms.
  • Set up a web application server to set the alarm time and store this alarm data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS). These cloud services for connecting IoT solutions include data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.
  • Invoke the services of the Weather Underground* API for accessing weather data.

What it is

Using an Intel® Edison board, this project lets you create a smart alarm clock that:

  • can be accessed with your mobile phone via the built-in web interface to set the alarm time.
  • displays live weather data on the LCD.
  • keeps track of how long it takes you to wake up each morning, using cloud-based data storage.

How it works

This smart alarm clock has a number of useful features. Set the alarm using a web page served directly from the Intel® Edison board, using your mobile phone. When the alarm goes off, the buzzer sounds, and the LCD indicates it’s time to get up. The rotary dial can be used to adjust the brightness of the display.

In addition, the smart alarm clock can access daily weather data via the Weather Underground* API and use it to change the color of the LCD. Optionally, all data can also be stored using the Intel IoT Examples Data store running in your own Microsoft Azure* account.

Hardware requirements

Grove* Starter Kit Plus containing:

  1. Intel® Edison platform with an Arduino* breakout board
  2. Grove* Rotary Analog Sensor
  3. Grove Button
  4. Grove Buzzer
  5. Grove RGB LCD

Software requirements

  1. Intel® System Studio IoT Edition
  2. Microsoft Azure*, IBM Bluemix*, or AWS account (optional)
  3. Weather Underground* API key

How to set up

To begin, clone the How-To Intel IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

To download a .zip file, in your web browser go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel® System Studio IoT Edition

** The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and jars.

Open Intel® System Studio IoT Edition. It starts by asking for a workspace directory; choose one and then click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project:

Give the project the name "AlarmClock" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also try to search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file to the project. Drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overridden.

The project uses the following external jars: gson-2.6.1, jetty-all-9.3.7.v20160115-uber, and joda-time-2.9.2. These can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed jars into this folder. In Intel® System Studio IoT Edition, select all jar files in the "jars" folder, then right-click -> Build path -> Add to build path.

Now you need to add the UPM jar files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click "Add External JARs...".

For this sample, you need the following jars:

  1. upm_buzzer.jar
  2. upm_grove.jar
  3. upm_i2clcd.jar

The jars can be found under the IoT DevKit installation root at \iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.

Connecting the Grove* sensors

You need to have a Grove* Shield connected to an Arduino*-compatible breakout board to plug all the Grove devices into the Grove Shield. Make sure the tiny VCC switch on the Grove Shield is set to 5V.

  1. Plug one end of a Grove cable into the Grove Rotary Analog Sensor, and then connect the other end to the A0 port on the Grove Shield.
  2. Plug one end of a Grove cable into the Grove Button, and then connect the other end to the D4 port on the Grove Shield.
  3. Plug one end of a Grove cable into the Grove Buzzer, and then connect the other end to the D5 port on the Grove Shield.
  4. Plug one end of a Grove cable into the Grove RGB LCD, and then connect the other end to any of the I2C ports on the Grove Shield.

Weather Underground* API key

To optionally fetch the real-time weather information, you need to get an API key from the Weather Underground* website:

http://www.wunderground.com/weather/api

You cannot retrieve weather conditions without obtaining a Weather Underground API key first. You can still run the example, but without the weather data.

Pass your Weather Underground API key to the sample program by modifying the WEATHER_API_KEY key in the config.properties file as follows:

  WEATHER_API_KEY="YOURAPIKEY"
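For illustration, a request URL for current conditions could be assembled from the configured key and a location. The URL pattern below follows the (since retired) Weather Underground REST API and is an assumption, as is the helper class itself:

```java
// Hypothetical helper: builds a Weather Underground "conditions" request
// URL from the configured API key and a location string. The URL pattern
// is an assumption, not taken from the sample's source.
public class WeatherUrl {
    public static String build(String apiKey, String location) {
        return "http://api.wunderground.com/api/" + apiKey
             + "/conditions/q/" + location + ".json";
    }

    public static void main(String[] args) {
        System.out.println(build("YOURAPIKEY", "San_Francisco"));
    }
}
```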

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS*, along with Node.js*, and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional weather data, change the WEATHER_API_KEY and LOCATION keys in the config.properties file as follows:

  WEATHER_API_KEY: "YOURAPIKEY"
  LOCATION: "San_Francisco"

To configure the example for the optional Microsoft Azure* data store, change the SERVER and AUTH_TOKEN keys in the config.properties file as follows:

  SERVER: "http://intel-examples.azurewebsites.net/logger/alarm-clock"
  AUTH_TOKEN: "s3cr3t"

To configure the example for both the weather data and the Microsoft Azure* data store, change the WEATHER_API_KEY, LOCATION, SERVER, and AUTH_TOKEN keys in the config.properties file as follows:

  WEATHER_API_KEY: "YOURAPIKEY"
  LOCATION: "San_Francisco"
  SERVER: "http://intel-examples.azurewebsites.net/logger/alarm-clock"
  AUTH_TOKEN: "s3cr3t"

Preparing the Intel® Edison board before running the project

In order for the sample to run, you need to copy some files to the Intel® Edison board. This can be done using SCP through SSH.

Two sorts of files need to be copied from the sample repository:

  1. Jar files: the external libraries used by the project need to be copied to "/usr/lib/java"
  2. Web files: the files within the site_contents folder need to be copied to "/var/AlarmClock"


Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel Edison board.

You will see output similar to the following when the program is running.

Setting the alarm

The alarm is set using a single-page web interface served directly from the Intel® Edison board while the sample program is running.

The web server runs on port 8080, so if the Intel® Edison board is connected to Wi-Fi* on 192.168.1.13, the address to browse to if you are on the same network is http://192.168.1.13:8080.
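The sample itself serves this page with Jetty (one of the jars added earlier), but the idea of serving a single settings page on port 8080 can be sketched with the JDK's built-in HttpServer. The handler and page content below are illustrative, not the sample's actual interface:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a single-page settings server. The real sample uses
// Jetty; this uses the JDK's built-in HttpServer so the idea can be
// shown without external dependencies.
public class AlarmPage {
    public static HttpServer start(int port) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/", exchange -> {
                byte[] body = "<html><body>Set alarm time here</body></html>"
                        .getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "text/html");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
            return server;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        start(8080);  // then browse to http://<board-ip>:8080
    }
}
```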

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.

Code Sample: Air Quality Sensor in Java* (How-to Intel® IoT Technology Series)


Introduction

This air quality monitor application is part of a series of how-to Intel® Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio IoT Edition. Intel® System Studio IoT Edition lets you create and test applications on Intel-based IoT platforms.
  • Store air quality data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS). These cloud services for connecting IoT solutions include data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.

What it is

Using an Intel® Edison board, this project lets you create an air quality reporter that:

  • continuously checks the air quality for airborne contaminants.
  • sounds an audible warning when the air quality is unhealthy.
  • stores a record of each time the air quality sensor detects contaminants, using cloud-based data storage.

How it works

This shop air quality monitor uses the sensor to constantly keep track of airborne contaminants.

If the sensor detects one of several different gases and the detected level exceeds a defined threshold, it makes a sound through the speaker to indicate a warning.

Also, optionally, the monitor stores the air quality data using the Intel® IoT Examples Data Store running in your own Microsoft Azure* account.
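The core decision just described amounts to a threshold comparison. The sketch below is illustrative only: the threshold value and names are assumptions, and the real sample reads the value from the UPM TP401 sensor and plays a sound through the Grove Speaker rather than printing:

```java
// Illustrative sketch of the monitor's core decision: compare a raw air
// quality reading against a threshold and report whether to warn.
public class AirCheck {
    static final int WARNING_THRESHOLD = 200;  // hypothetical raw-sensor value

    public static boolean shouldWarn(int reading) {
        return reading > WARNING_THRESHOLD;
    }

    public static void main(String[] args) {
        int reading = 250;  // e.g. a value read from the air quality sensor
        if (shouldWarn(reading)) {
            System.out.println("Air quality warning: reading=" + reading);
        }
    }
}
```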

Hardware requirements

Grove* Home Automation Kit containing:

  1. Intel® Edison platform with an Arduino* breakout board
  2. Grove* Air Quality Sensor (http://iotdk.intel.com/docs/master/upm/node/classes/tp401.html)
  3. Grove Speaker (http://iotdk.intel.com/docs/master/upm/node/classes/grovespeaker.html)

Software requirements

  1. Intel® System Studio IoT Edition
  2. Microsoft Azure*, IBM Bluemix*, or AWS account (optional)

How to set up

To begin, clone the How-To Intel IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

To download a .zip file, in your web browser, go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel® System Studio IoT Edition

** The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and jars.

Open Intel® System Studio IoT Edition. It starts by asking for a workspace directory; choose one and click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project:

Give the project the name "AirQualitySensor" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also try to search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file to the project. Drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overridden.

The project uses the following external jar: gson-2.6.1. It can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed jars into this folder. In Intel® System Studio IoT Edition, select all jar files in the "jars" folder, then right-click -> Build path -> Add to build path.

Now you need to add the UPM jar files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click "Add External JARs...".

For this sample, you need the following jars:

  1. upm_grovespeaker.jar
  2. upm_gas.jar

The jars can be found under the IoT DevKit installation root at \iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.

Connecting the Grove* sensors

You need to have a Grove* Shield connected to an Arduino-compatible breakout board to plug all the Grove devices into the Grove Shield. Make sure you have the tiny VCC switch on the Grove Shield set to 5V.

  1. Plug one end of a Grove cable into the Grove Air Quality Sensor, and connect the other end to the A0 port on the Grove Shield.
  2. Plug one end of a Grove cable into the Grove Speaker, and connect the other end to the D5 port on the Grove Shield.

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS, along with Node.js, and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional data store, change the SERVER and AUTH_TOKEN keys in the config.properties file to the server URL and authentication token that correspond to your own data store server setup. For example:

  SERVER=http://intel-examples.azurewebsites.net/logger/air-quality
  AUTH_TOKEN=s3cr3t

Preparing the Intel® Edison board before running the project

In order for the sample to run, you need to copy some files to the Intel® Edison board. This can be done using SCP through SSH. The following files need to be copied from the sample repository:

Jar files: the external libraries used by the project need to be copied to "/usr/lib/java" on the board.

Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel® Edison board.

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.

NFV Performance Optimization for Virtualized Customer Premises Equipment

Paul Veitch
BT Research & Innovation
Ipswich, UK
paul.veitch@bt.com
Tommy Long
Intel Shannon
County Clare, Ireland
thomas.long@intel.com
Paul Hutchison
Brocade
Bracknell, UK
paul.hutchison@brocade.com

Abstract. A key consideration for real-world network functions virtualization solutions is the ability to provide predictable and guaranteed performance for customer traffic. Although many proof-of-concepts have focused on maximizing network throughput, an equally important, and in some use cases more important, performance metric is latency. This paper describes a testbed at BT's Adastral Park laboratories based on the Intel® Open Network Platform architecture, devised to characterize the performance of a virtual Customer Premises Equipment setup.

The test results highlight significant performance improvements in terms of reduced latency and jitter using an optimized setup that incorporates the Data Plane Development Kit. For example, average latency was reduced by between 38 percent and 74 percent (depending on the packet profile), while maximum latency was reduced by up to a factor of six. Such insights into performance optimization will be an essential component in enabling intelligent workload placement and orchestration of resources in both current networks and future 5G deployments.

I. Introduction

Network functions virtualization (NFV) is rapidly moving from laboratory testbeds to production environments and trials involving real-world customers [1]. Standardization efforts continue apace via the European Telecommunications Standards Institute Industry Specification Group (ETSI ISG), where a key consideration is performance benchmarking and best practice [2]. Although network throughput is a key performance metric, this paper addresses the problem from a different angle, namely latency sensitivity. In particular, the virtual Customer Premises Equipment (vCPE) use case for enterprises involves a mix of virtualized network functions (VNFs) that typically reside at the customer premises. In the ETSI use case document [3], this is referred to as VE-CPE (Figure 1).

Examples of network functions at customer premises that can run as VNFs on a standard x86 server with a hypervisor include, but are not limited to, routers, firewalls, session border controllers, and WAN accelerators. The branch sites will often require modest WAN bandwidth compared with the hub sites, that is, a WAN link of tens or hundreds of Mbit/s rather than >=1 Gbit/s. From a performance perspective, therefore, there is less emphasis on maximizing throughput in a branch site vCPE implementation and a greater emphasis on ensuring that latency and jitter are kept to a minimum.

Figure 1: Virtual Enterprise CPE (VE-CPE) use case including branch locations.

Most corporate networks connecting branch sites across a WAN infrastructure will involve some proportion of Voice-over-IP (VoIP) traffic, and this will have much more stringent performance targets in terms of latency/jitter to ensure predictable and guaranteed performance. Even if voice-related network functions such as Session-Border Controllers (SBCs) have been implemented as “non-NFV” hardware appliances, if other functions at the customer premise that carry end-user traffic have been virtualized—an obvious example is the Customer Edge (CE) router—it is vital to ensure that the performance of the NFV infrastructure has been suitably “tuned” to provide some level of predictability for latency and jitter performance. This will provide a clearer view of the contribution that the NFV infrastructure components make to the overall performance characterization of latency-sensitive applications.

Section II explains the Intel® Open Network Platform (Intel® ONP) and Data Plane Development Kit (DPDK), while Section III outlines the approach to testing a vCPE setup in terms of latency/jitter performance characterization. The actual test results are detailed in Section IV, followed by conclusions in Section V and recommended further work in Section VI.

II. Open Network Platform and Data Plane Development Kit

The way in which telecoms operators define and implement NFV solutions for specific use cases such as vCPE depends on a number of factors, including cost, technical criteria, and ensuring vendor interoperability. Combining these criteria results in increased motivation towards open-source solutions for NFV, for example those that leverage Kernel-based Virtual Machine (KVM) hypervisor technology with Open vSwitch* (OVS), and open management tools such as OpenStack*. Intel ONP combines a number of such open source "ingredients" to produce a modular architectural framework for NFV [4].

From a performance perspective, one of the key components of the Intel ONP architecture is the DPDK, which can be used to maximize performance for VNFs running on the KVM hypervisor. Figure 2(a) shows a simple overview of the standard Open vSwitch, while Figure 2(b) depicts Open vSwitch with DPDK. In the standard OVS, packets forwarded between Network Interface Controllers (NICs) traverse the kernel space data path of the virtual switch, which consists of a simple flow table indicating what to do with received packets. Only the first packet in a flow needs to go to the user space of the virtual switch (via the "slow path"), because it does not match any entry in the simple table in the kernel data path. After the user space of the OVS handles the first packet in the flow, it updates the flow table in the kernel space so that subsequent packets in the flow are not sent to the user space. In this way, the number of packets that need to traverse the computationally expensive user space path is kept to a minimum.
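The flow-table caching just described can be illustrated with a toy sketch (plain Java, not OVS code): the first packet of a flow misses the kernel table and takes the slow path to user space, which installs an entry so later packets in the flow are forwarded directly.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of a kernel flow-table cache: a table miss models the
// "slow path" trip to user space; a hit models the kernel "fast path".
public class FlowCache {
    private final Map<String, String> kernelTable = new HashMap<>();
    public int slowPathHits = 0;

    public String forward(String flowKey) {
        String action = kernelTable.get(flowKey);
        if (action == null) {
            slowPathHits++;                    // first packet: consult user space
            action = "output:port1";           // user-space decision (illustrative)
            kernelTable.put(flowKey, action);  // install entry for this flow
        }
        return action;                         // later packets: fast path
    }

    public static void main(String[] args) {
        FlowCache cache = new FlowCache();
        cache.forward("flowA");
        cache.forward("flowA");
        cache.forward("flowB");
        // three packets across two flows take the slow path only twice
        System.out.println("slow path hits: " + cache.slowPathHits);
    }
}
```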

Figure 2: Overview of (a) Open vSwitch* and (b) Data Plane Development Kit vSwitch.

In the Open vSwitch with DPDK model (Figure 2(b)), the main forwarding plane (sometimes called the "fast path") is in the user space of the OVS and uses DPDK. One of the key differences in this architecture is that the NICs are driven by Poll Mode Drivers (PMDs), meaning incoming packets are continuously polled rather than being handled asynchronously via interrupts. Initial packets in a flow are still sent to another module in the user space, playing the same role as the slow path in the kernel data path case.

Figure 3 depicts the actual traffic flows from OVS to the guest virtual machine (VM), which in the context of this paper is a virtual router function. In the case of standard OVS, the OVS forwarding is completed in the kernel space (Figure 3(a)), while for the OVS with DPDK model, the OVS forwarding is completed in the user space (Figure 3(b)), and the guest VM’s virtio queues are mapped to the OVS DPDK and hence can be read/written directly by the OVS. This “user space to user space” path should prove more performant than the “kernel-based” traffic path. Note that for both architectures, the guest VM can use either the DPDK or standard Linux* drivers. In the tests described later, the high-performance scenario uses a VNF with DPDK drivers.

Figure 3: Traffic flows: (a) Open vSwitch* and (b) Data Plane Development Kit vSwitch.

In theory, the Open vSwitch with DPDK should enable improved performance over the standard OVS model. However, it is important to conduct specific tests to validate this in practice. The next section describes the testbed setup, with the actual results explained in the subsequent section.

III. Overview of testbed

The key components of the testbed are shown in Figure 4.

Figure 4: Baseline and high-performance testbeds (specific hardware details shown relate to compute nodes that are “systems-under-test”).

There were two reference testbed platforms used for the purposes of exploring and comparing the impact of high-performance tuning such as DPDK on latency/jitter test results. Each testbed comprises a single OpenStack controller node and a corresponding compute node, built using the “Kilo” system release. Essentially the compute node and associated guest VNFs running on the hypervisor represent the “systems-under-test” (SUT).

The baseline setup uses an Intel® Xeon® processor E5-2680 (code-named Sandy Bridge) and does not include any BIOS optimizations. In contrast, the high-performance setup uses an Intel® Xeon® processor E5-2697 v3 (code-named Haswell) and includes BIOS tuning such as “maximize performance versus power” and disablement of C-states and P-states. The baseline uses the standard kernel data path, whereas the high-performance setup uses the OVS DPDK data path. Although both testbeds use Fedora* 21 as the base OS, the baseline uses a standard non-real-time kernel (3.18), whereas the high-performance setup uses the Linux Real-Time Kernel (3.14) with a tuned configuration (isolation of vSwitch and VM cores from the host OS, disabling Security-Enhanced Linux, using idle polling, and selecting the Time-Stamp Counter as the clock source). The baseline setup uses “vanilla” OpenStack settings to spin up the VM and assign network resources. In contrast, the high-performance setup is more finely tuned, with dedicated CPUs pinned for the vSwitch and VNFs respectively. The high-performance setup also ensures that the CPUs and memory from the same socket are used for the VNFs, and that the socket in use is the one that connects directly to the physical NIC interfaces of the server.
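
As a concrete illustration of the kernel-level tuning described above, settings such as core isolation, idle polling, and the TSC clock source are typically applied via the kernel boot command line. The fragment below is a representative sketch only; the exact parameter values used in the testbed may differ:

```text
# Representative kernel boot parameters (illustrative values, not the
# testbed's exact configuration):
#   isolcpus              - isolate vSwitch/VM cores from the host scheduler
#   idle=poll             - poll in the idle loop instead of entering C-states
#   clocksource=tsc       - use the Time-Stamp Counter as the clock source
#   intel_idle.max_cstate - cap C-state entry
#   selinux=0             - disable Security Enhanced Linux
isolcpus=2-13 idle=poll clocksource=tsc intel_idle.max_cstate=0 selinux=0
```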

In both testbeds, the actual VNF used in all tests was a Brocade 5600* virtual router R3.5, and a Spirent Test Center C1* load-testing appliance was used for running test traffic. In the high-performance scenario, the virtual router uses DPDK drivers. As shown in Figure 5, both single VNF and dual VNF service chain permutations were tested.

Figure 5: Systems-under-test: (a) single virtual router and (b) two virtual routers in series service chain.

The test cases were set up to ensure “safe” operational throughput for low-end branch offices (<=100Mbit/s) such that no adverse impact on the latency/jitter measurements would occur. The following packet profiles were used for all tests:

  • 64-byte frames (bidirectional load of 25 Mbps)
  • 256-byte frames (bidirectional load of 25 Mbps)
  • “iMix” blend of frame sizes in realistic proportions (bidirectional load of 50 Mbps)
  • 1500-byte frames (bidirectional load of 100 Mbps)
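
For reference, the offered packet rate implied by each profile can be derived from the frame size and bit rate. The sketch below is our own illustration (not part of the original test methodology) and assumes the standard 20 bytes of on-wire overhead per Ethernet frame (8-byte preamble plus 12-byte inter-frame gap):

```java
// Illustrative only: derive offered packets/sec from frame size and load.
// Assumes 20 bytes of on-wire overhead per frame (preamble + inter-frame gap).
public class PacketRate {
    static final int WIRE_OVERHEAD_BYTES = 20;

    public static double pps(int frameBytes, long loadBitsPerSec) {
        double bitsPerFrame = (frameBytes + WIRE_OVERHEAD_BYTES) * 8.0;
        return loadBitsPerSec / bitsPerFrame;
    }

    public static void main(String[] args) {
        System.out.printf("64B   @ 25 Mbps : %.0f pps%n", pps(64, 25_000_000L));
        System.out.printf("256B  @ 25 Mbps : %.0f pps%n", pps(256, 25_000_000L));
        System.out.printf("1500B @ 100 Mbps: %.0f pps%n", pps(1500, 100_000_000L));
    }
}
```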

The test equipment uses a signature with a timestamp to determine the latency between frames. This signature sits at the end of the payload, next to the FCS (Frame Check Sequence), and carries a timestamp, sequence numbers, and a stream ID. Jitter is defined here as the time difference between two arriving frames in the same stream, and is hence a measure of packet delay variation. The same traffic load, comprising a single traffic flow, was generated to the SUT in each direction, and the results described in the following section capture the “worst-case” metrics observed for a particular direction (that is, the cited values of latency, jitter, and so on are for a single direction only, not round-trip values). It is also worth noting that the test results are gathered at runtime: counters are cleared after ~20 seconds, and the test is then allowed to run for the designated duration, ensuring that the first packets in a flow taking the slow path do not distort the results.
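
The latency and jitter metrics described above can be sketched in a few lines of Java. This is our own illustration, not the test equipment's implementation: per-frame latency is the receive timestamp minus the embedded transmit timestamp, and jitter is the variation in latency between consecutive frames of a stream:

```java
// Illustrative sketch (not the Spirent implementation): per-frame latency is
// rx - tx from the embedded timestamp signature; jitter is the variation in
// latency between consecutive frames of the same stream.
public class JitterCalc {
    public static double maxJitterMicros(double[] txMicros, double[] rxMicros) {
        double max = 0.0;
        for (int i = 1; i < txMicros.length; i++) {
            double prevLatency = rxMicros[i - 1] - txMicros[i - 1];
            double latency = rxMicros[i] - txMicros[i];
            max = Math.max(max, Math.abs(latency - prevLatency));
        }
        return max;
    }

    public static void main(String[] args) {
        double[] tx = {0, 100, 200, 300};  // frames sent every 100 us
        double[] rx = {50, 152, 249, 351}; // one-way latencies: 50, 52, 49, 51 us
        System.out.println(maxJitterMicros(tx, rx)); // prints 3.0
    }
}
```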

IV. Test results

A. Mixed Tests

The average one-way latency measured over 5-minute durations for the four different packet profile scenarios is shown for the single VNF setup in Figure 6 and the dual VNF setup in Figure 7.

Figure 6: Average latency results in microseconds for single virtualized network function (lower is better).

The results for average latency across the range of packet profiles clearly show significantly improved performance (lower average latency) in the high-performance setup compared with the “vanilla” baseline testbed. For the single VNF tests, the average latency is reduced by between 38 percent and 74 percent, while in the dual VNF scenario the reduction is between 34 percent and 66 percent. As would be expected, the dual VNF case shows higher overall latency for both testbeds due to the additional packet switches between the VNF instances and through the virtual switches within the hypervisor. Note that zero packet loss was observed for these tests.

Figure 7: Average latency results in microseconds for two virtualized network functions (lower is better).

B. Focus on 256-Byte Packet Tests

It is instructive to explore in more detail the results for a specific packet profile scenario. For example, the 256-byte packet tests are closely representative of VoIP frames generated using the Real-Time Transport Protocol (RTP) with G.711 encoding [5]. Figure 8 shows the minimum, average, and maximum one-way latency values for both single and dual VNF scenarios using 256-byte packets.
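
The 256-byte figure can be sanity-checked against the on-wire size of a G.711 RTP frame. The arithmetic below is our own back-of-the-envelope check (assuming the common 20-ms packetization interval at 64 kbit/s), not a figure from the paper:

```java
// Back-of-the-envelope check (our own arithmetic): on-wire size of a G.711
// voice frame on Ethernet, assuming 20-ms packetization at 64 kbit/s.
public class G711FrameSize {
    public static int frameBytes(int packetizationMs) {
        int payload = 64_000 / 8 * packetizationMs / 1000; // 160 bytes at 20 ms
        int rtp = 12, udp = 8, ip = 20, eth = 14, fcs = 4;  // header overhead
        return payload + rtp + udp + ip + eth + fcs;        // 218 bytes at 20 ms
    }

    public static void main(String[] args) {
        System.out.println(frameBytes(20)); // prints 218 -- close to the 256-byte test frames
    }
}
```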

Figure 8: Detailed latency results in microseconds for 256-byte packets.

Figure 9: Detailed jitter results in microseconds for 256-byte packets.

Figure 9 shows the corresponding average and maximum one-way jitter values. The maximum latency and jitter values are important for characterizing worst-case performance on both testbeds. Crucially, the maximum latency is reduced in the high-performance setup by factors of 6 and 7.4 for the single and dual VNF cases, respectively. The maximum jitter, meanwhile, is reduced by factors of 24 and 8.5 for the single and dual VNF cases, respectively. Note that zero packet loss was observed for these tests.

As well as assessing performance over short fixed duration intervals, it is important to understand the potential drift in performance over longer periods. Figure 10 shows the results for 5-minute maximum latency tests compared to 16-hour tests carried out using 256-byte packets and the single VNF setup. In essence, the test result highlights the similar performance improvement achieved using the optimized (that is, high performance) versus non-optimized (baseline) setup: the maximum latency is reduced by a factor of 5 for the 16-hour test and a factor of 6 for the 5-minute test. The key point to note however is that the maximum latency values are significantly higher in the 16-hour tests, which can be attributed to very occasional system interrupt events (that is, housekeeping tasks) which will have an impact on only a very small number of test packets. Despite this, the value of 2-msec maximum one-way latency for the 16-hour/256-byte packet test for the high-performance setup is still comfortably within the one-way transmission target of 150 msec for voice traffic, as specified in ITU-T specification G.114 [6]. In other words, the 2-msec worst-case contribution added by the vCPE setup only amounts to 1.3% of the overall one-way recommended budget for latency. Indeed, even the non-optimized baseline setup comprising 9.95 msec maximum one-way latency is only 6.6% of this budget.
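
The budget percentages quoted above follow directly from the 150-msec one-way G.114 target; a quick check of the arithmetic:

```java
// Quick check of the latency-budget figures quoted above: share of the
// 150-ms one-way G.114 budget consumed by the measured maximum latencies.
public class LatencyBudget {
    public static double percentOfBudget(double maxLatencyMs) {
        return maxLatencyMs / 150.0 * 100.0;
    }

    public static void main(String[] args) {
        System.out.printf("optimized: %.1f%%%n", percentOfBudget(2.0));  // ~1.3%
        System.out.printf("baseline:  %.1f%%%n", percentOfBudget(9.95)); // ~6.6%
    }
}
```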

Figure 10: Soak test latency results in microseconds (16-hour compared to 5-minute tests).

V. Summary and Conclusions

This paper has demonstrated fine-tuning of a virtual CPE infrastructure platform based on a KVM hypervisor. Specifically, leveraging components of the Intel ONP architecture such as the DPDK can provide significant improvements in performance over a baseline (that is, non-optimized) setup. For the single VNF tests, the average one-way latency is reduced by between 38 percent and 74 percent, while in the dual VNF scenario the reduction is between 34 percent and 66 percent. For the more VoIP-representative test case using 256-byte packets, the maximum latency is reduced in the high-performance setup by factors of 6 and 7.4 for the single and dual VNF cases, respectively, while maximum jitter is reduced by factors of 24 and 8.5 for the single and dual VNF cases, respectively.

Based on the experimental results, it can be concluded that performance-tuning of NFV infrastructure for latency-sensitive applications such as VoIP will achieve better and more deterministic overall performance in terms of latency and jitter than a baseline (that is, non-optimized) setup. Whether a network operator decides to implement such optimizations will be driven largely by the required mix of VNFs being supported on the vCPE infrastructure, and the degree to which SLAs for performance metrics such as latency and jitter must be associated with the services underpinned by the VNFs. In practical terms, actual target SLAs used by network operators will cite performance targets within the scope of the operator’s own backbone/transit network domain, and will vary according to geography. IP packet round-trip values of 30 msec to 40 msec for Europe, 40 msec to 50 msec for North America, and up to 90 msec for Trans-Atlantic links are typical examples and are targets for average latency.

If network operators do opt for optimized NFV setups using capabilities such as DPDK, rather than a vanilla out-of-the-box solution, they need to be aware of the possible impact on higher-layer orchestration solutions, which will need clearer visibility of underlying infrastructure parameters and settings to ensure VNFs with specific performance requirements are provisioned and managed accordingly. The experiments presented in this paper can be viewed as a foundation to help advance the understanding of such performance optimizations, equally applicable to current networks, and future 5G infrastructures.

VI. Future Work

Further research topics of interest include the following:

  • Consideration of “layers of optimization” and their individual contributions to the overall result: hardware choices (specifically, the contribution of the Intel® Xeon® processor E5-2680 versus the Intel® Xeon® processor E5-2697 v3 to the differences in latencies), BIOS settings (for example, P-state enablement to allow use of Enhanced Intel® SpeedStep® Technology for improved power/load efficiency), Real-Time Kernel tuning options (for example, the “no hertz” tickless kernel, read-copy-update (RCU) polling), hypervisor settings, and VNF setup all contribute to the architecture. Clearer visibility of possible optimizations and their effect at each layer should therefore be assessed.
  • Similar tests can be considered to perform a performance characterization based on a richer suite of diverse VNF types, including VoIP-specific VNFs.
  • Test analytics can be further refined to assess profiling and frequency distributions of packet latency and jitter performance.
  • Further analysis of the impact of NFV optimizations on higher-level management: making an orchestrator aware of underlying resources and able to leverage specific fine-tuning of NFV infrastructure using capabilities like DPDK adds complexity to the management solution, but makes it possible to customize the allocation of latency-sensitive VNFs onto the most suitable NFV infrastructure.

As is evident, there are a number of interesting challenges and problems yet to be addressed in this space.

References

1. J. Stradling. “Global WAN Update: Leading Players and Major Trends,” Current Analysis Advisory Report, Sept 2015.

2. “Network Functions Virtualization Performance & Portability Best Practices,” ETSI ISG Specification GS NFV-PER 001, V1.1.1, June 2014.

3. “Network Functions Virtualisation Use Cases,” ETSI ISG Specification GS NFV 001, V1.1.1, October 2013.

4. “Intel® Open Network Platform Server (Release 1.5),” Release Notes, November 2015.

5. “NFV Performance benchmarking for vCPE,” Network Test Report, Overture Networks, May 2015.

6. “One-Way Transmission Time,” ITU-T Recommendation G.114, May 2003.

Code Sample: Access Control in Java*, (How-to Intel® IoT Technology Series)


Introduction

This access control system application is part of a series of how-to Intel Internet of Things (IoT) code sample exercises using the Intel® IoT Developer Kit, Intel® Edison development platform, cloud platforms, APIs, and other technologies.

From this exercise, developers will learn how to:

  • Connect the Intel® Edison development platform, a computing platform designed for prototyping and producing IoT and wearable computing products.
  • Interface with the Intel® Edison platform IO and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore the IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio IoT Edition. Intel® System Studio lets you create and test applications on Intel®-based IoT platforms.
  • Set up a web application server to let users enter the access code to disable the alarm system, and store this alarm data using Azure Redis Cache* from Microsoft Azure*, Redis Store* from IBM Bluemix*, or ElastiCache* using Redis* from Amazon Web Services* (AWS), different cloud services for connecting IoT solutions including data analysis, machine learning, and a variety of productivity tools to simplify the process of connecting your sensors to the cloud and getting your IoT project up and running quickly.

What it is

Using an Intel® Edison board, this project lets you create a smart access control system that:

  • monitors a motion sensor to detect when a person is in an area that requires authorization.
  • can be accessed with your mobile phone via the built-in web interface to disable the alarm.
  • keeps track of access using cloud-based data storage.

How it works

This access control system provides the following user flow: 

  1. Passive infrared (PIR) motion sensor looks for motion.
  2. User sets off the motion detector and has 30 seconds to enter the correct code in the browser.
  3. If the user fails to enter the code in the given time, the alarm goes off.
  4. If the user enters the correct code, the system waits for 30 seconds before allowing the user to pass.
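
The flow above amounts to a small state machine. The simplified sketch below is our own illustration (the class and method names are not taken from the sample's source), with timestamps passed in explicitly so that the 30-second logic can be exercised without hardware:

```java
// Simplified model of the access-control flow above (illustrative only, not
// the sample's actual classes): motion starts a countdown; a correct code
// within the window disarms the system, otherwise the alarm fires.
public class AccessControlFlow {
    enum State { WATCHING, WAITING_FOR_CODE, DISARMED, ALARM }

    private final String code;
    private final long windowMs;
    private State state = State.WATCHING;
    private long deadline;

    public AccessControlFlow(String code, long windowMs) {
        this.code = code;
        this.windowMs = windowMs;
    }

    public void motionDetected(long nowMs) {
        if (state == State.WATCHING) {
            state = State.WAITING_FOR_CODE;
            deadline = nowMs + windowMs;
        }
    }

    public void codeEntered(String entered, long nowMs) {
        if (state != State.WAITING_FOR_CODE) return;
        if (nowMs > deadline) { state = State.ALARM; return; }
        state = entered.equals(code) ? State.DISARMED : State.WAITING_FOR_CODE;
    }

    // Called periodically; trips the alarm once the window has expired.
    public void tick(long nowMs) {
        if (state == State.WAITING_FOR_CODE && nowMs > deadline) state = State.ALARM;
    }

    public State state() { return state; }

    public static void main(String[] args) {
        AccessControlFlow flow = new AccessControlFlow("4321", 30_000);
        flow.motionDetected(0);
        flow.codeEntered("4321", 10_000);
        System.out.println(flow.state()); // prints DISARMED
    }
}
```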

Additionally, various events (looking-for-motion, motion-detected, invalid-code, etc.) are logged. Optionally, all data can be stored using the Intel® IoT Examples data store running in your own Microsoft Azure* account.

Hardware requirements

Grove* Starter Kit Plus containing:

Software requirements

  1. Intel® System Studio IoT Edition
  2. Microsoft Azure*, IBM Bluemix*, or AWS account (optional)

How to set up

To begin, clone the How-To Intel IoT Code Samples repository with Git* on your computer as follows:

$ git clone https://github.com/intel-iot-devkit/how-to-code-samples.git

To download a .zip file, in your web browser, go to https://github.com/intel-iot-devkit/how-to-code-samples and click the Download ZIP button at the lower right. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the program to Intel® System Studio IoT Edition

Note: The following screenshots are from the Alarm Clock sample; the technique for adding the program is the same, but with different source files and jars.

Open Intel® System Studio IoT Edition. It will start by asking for a workspace directory; choose one then click OK.

In Intel® System Studio IoT Edition, select File -> New -> Intel® IoT Java Project:

Give the project the name "AccessControl" and then click Next.

You now need to connect to your Intel® Edison board from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® Edison board in the "Target Name" field. You can also try to search for it using the "Search Target" button. Click Finish when you are done.

You have successfully created an empty project. You now need to copy the source files and the config file into the project. Drag all of the files from your Git repository's "src" folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overwritten.

The project uses the following external jars: gson-2.6.1, jetty-all-9.3.7.v20160115-uber, and joda-time-2.9.2. These can be found in the Maven Central Repository. Create a "jars" folder in the project's root directory, and copy all needed jars into this folder. In Intel® System Studio IoT Edition, select all jar files in the "jars" folder, then right-click -> Build path -> Add to build path.

Now you need to add the UPM jar files relevant to this specific sample. Right-click the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click "Add external JARs...".

For this sample you will need the following jars:

  1. upm_i2clcd.jar
  2. upm_biss0001.jar

The jars can be found under the IoT Devkit installation root, at iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.

Connecting the Grove* sensors

You need a Grove* Shield connected to an Arduino*-compatible breakout board so that the Grove devices can be plugged into the Grove Shield. Make sure the tiny VCC switch on the Grove* Shield is set to 5V.

  1. Plug one end of a Grove cable into the Grove PIR Motion Sensor, and then connect the other end to the D4 port on the Grove Shield.
  2. Plug one end of a Grove cable into the Grove RGB LCD, and connect the other end to any of the I2C ports on the Grove Shield.

Data store server setup

Optionally, you can store the data generated by this sample program in a backend database deployed using Microsoft Azure*, IBM Bluemix*, or AWS*, along with Node.js* and a Redis* data store.

For information on how to set up your own cloud data server, go to:

https://github.com/intel-iot-devkit/intel-iot-examples-datastore

Configuring the example

To configure the example for the optional data store, change the SERVER and AUTH_TOKEN keys in the config.properties file to the server URL and authentication token that correspond to your own data store server setup. For example:

  SERVER: "http://intel-examples.azurewebsites.net/logger/access-control"
  AUTH_TOKEN: "s3cr3t"

To configure the required access code to be used for the example app, change the CODE key in the config.properties file to whatever you want to use. For example:

  CODE: "4321"
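
On the Java side, keys like these are typically read with java.util.Properties, which accepts both '=' and ':' as key/value separators, so the "KEY: value" style above parses directly. The helper below is our own minimal sketch (its class and method names are not necessarily the sample's); note that the quotes around the values are kept by Properties, so they are stripped explicitly:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Minimal sketch (our own helper, not the sample's actual code) of reading
// the config.properties keys shown above.
public class Config {
    // Parse properties text; java.util.Properties accepts ':' as a separator.
    public static Properties parse(String text) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(text));
        } catch (IOException e) {
            throw new IllegalStateException(e); // cannot happen for StringReader
        }
        return props;
    }

    // Properties keeps the surrounding quotes in the value; strip them.
    public static String unquote(String value) {
        return value == null ? null : value.replaceAll("^\"|\"$", "");
    }

    public static void main(String[] args) {
        Properties props = parse("CODE: \"4321\"\nAUTH_TOKEN: \"s3cr3t\"");
        System.out.println(unquote(props.getProperty("CODE"))); // prints 4321
    }
}
```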

Preparing the Intel® Edison board before running the project

In order for the sample to run, you will need to copy some files to the Intel® Edison board. Two kinds of files need to be copied from the sample repository:

  1. Jar files: external libraries used by the project need to be copied to "/usr/lib/java"
  2. Web files: files within the site_contents folder need to be copied to "/var/AccessControl"

Running the program using Intel® System Studio IoT Edition

When you're ready to run the example, make sure you have saved all the files.

Click the Run icon on the toolbar of Intel® System Studio IoT Edition. This runs the code on the Intel® Edison board.

You will see output similar to the following when the program is running.

Stopping the alarm

The alarm is set using a single-page web interface served directly from the Intel® Edison board while the sample program is running.

The web server runs on port 8080, so if the Intel® Edison board is connected to Wi-Fi* on 192.168.1.13, the address to browse to if you are on the same network is http://192.168.1.13:8080.

Determining the IP address of the Intel® Edison board

You can determine what IP address the Intel® Edison board is connected to by running the following command:

ip addr show | grep wlan

You will see output similar to the following:

3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0

The IP address is shown next to inet. In the example above, the IP address is 192.168.1.13.
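
If you need the address programmatically (for example, to display it in the application's log), the relevant line can be extracted with a regular expression. This is a minimal sketch of our own, not part of the sample:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal helper (not part of the sample) to pull the IPv4 address out of
// "ip addr show" output such as the example above.
public class WlanAddress {
    private static final Pattern INET =
            Pattern.compile("inet\\s+(\\d+\\.\\d+\\.\\d+\\.\\d+)/\\d+");

    public static String extract(String ipAddrOutput) {
        Matcher m = INET.matcher(ipAddrOutput);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String sample = "3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000\n"
                      + "    inet 192.168.1.13/24 brd 192.168.1.255 scope global wlan0";
        System.out.println(extract(sample)); // prints 192.168.1.13
    }
}
```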

IMPORTANT NOTICE: This software is sample software. It is not designed or intended for use in any medical, life-saving or life-sustaining systems, transportation systems, nuclear systems, or for any other mission-critical application in which the failure of the system could lead to critical injury or death. The software may not be fully tested and may contain bugs or errors; it may not be intended or suitable for commercial release. No regulatory approvals for the software have been obtained, and therefore software may not be certified for use in certain countries or environments.

 

IoT Path-to-Product: The Making of an Intelligent Vending Machine


To demonstrate a rapid path-to-product IoT solution for retail using cloud data analytics, a proof of concept was created using the Intel® IoT Developer Kit and Grove* IoT Commercial Developer Kit that was scaled to an industrial solution using an Intel® IoT Gateway, industrial sensors, Intel® IoT Gateway Software Suite, Intel® System Studio, and Microsoft* Azure* cloud services. This solution monitors the inventory, product sales, and maintenance of a vending machine. The gateway gathers data from temperature sensors, stepper motors, a coil switch, and a product-purchasing application for edge data analytics.

This article contains an overview of the creation of the Intelligent Vending Machine Prototype. For a how-to, see IoT Path-to-Product: How to Build an Intelligent Vending Machine.

Visit GitHub for this project's latest code samples and documentation.

 

A key vector of opportunity related to the Internet of Things (IoT) springs from adding intelligence to everyday devices, as a means to improve their operation as well as the efficiency and effectiveness of the business operations associated with them. For example, vending machines are ubiquitous, and the familiar, common machines with coin or currency acceptors represent a significant potential revenue stream for retailers of various types. It’s no wonder that the scope of goods available in vending machines has grown dramatically in recent years, including consumer electronics being commonly sold in airports and other facilities.

Figure 1. Completed intelligent vending machine.

Vending machines have the advantage over other retail points of presence that they operate 24 hours per day, 365 days per year, without the requirement for human cashiers. They also give distributors significant control where it would otherwise not be possible, such as in public spaces and office buildings. At the same time, however, vending machines require regular service: frequently to replenish the product being sold and less frequently for scheduled and unscheduled maintenance. 

Giving the owners of vending machines greater insight into the status of each unit in their vending fleet has the potential to improve the efficiency of that service effort. Intel undertook a development project to investigate this and other opportunities associated with building an intelligent vending machine. The completed device is shown in Figure 1. This project drew inspiration in large part from the existing solution blueprint for intelligent vending machines from Intel and ADLINK Technologies. This document recounts the course of that project development effort. It begins with an abstract description of a structured project methodology, divided into phases, and then recounts the process of the project’s development in detail, phase by phase.

Interested parties can use this narrative to retrace the steps taken by the Intel project team in developing the intelligent vending machine. Perhaps more importantly, however, it can be generalized as a set of guidelines to address the needs of other types of projects. Intel makes this document freely available and encourages the use of this methodology and process to drive inquiry, invention, and innovation for the Internet of Things.


Methodology

By its nature, IoT embraces open-ended innovation, with endless diversity of projects to add intelligence to objects from the simple to the complex, and from the mundane to the exotic. At the same time, every project builds on experience the industry has gained from IoT projects that have gone before, and best practices suggest structural elements in common among IoT projects in general.

To take advantage of those commonalities and help increase the chances of success during development, Intel has developed a structured approach to IoT project development. It consists of a six-phase model that guides the entire path to product, which begins with the first glimmer of an idea and follows through until the final solution is in commercial use. It is intended to be general enough that it can be adapted to the needs of any IoT project.

Initiation phases (1-3)

The first three phases of the project methodology are investigative. They focus on ideation and assessing the potential of the project to solve a given problem, in preparation for ultimately leading to a commercially viable product. As such, these phases value brainstorming and proof of concept over rigorously addressing finer points of design. 

Rapid prototyping is facilitated by using the Grove* IoT Commercial Developer Kit, which consists of an Intel® NUC system, Intel® IoT Gateway Software Suite, and the Grove* Starter Kit Plus (manufactured by Seeed). The project also uses the Arduino* 101 board.

Note: Known in the United States as “Arduino 101,” this board is known elsewhere as “Genuino* 101.” It is referred to throughout the rest of this document as the “Arduino 101” board.

  • Phase 1: Define the opportunity that the project will take advantage of. The first step of an IoT project is to identify the problem or opportunity that the project will address. Documentation at this stage should identify the opportunity itself, the value of the solution (to end users as well as the organizations that build and implement it), and limitations of the project concept, including design challenges and constraints.
  • Phase 2: Design a proof of concept to take advantage of the opportunity. The initial solution design proposes a practical approach to building the solution, including hardware, software, and network elements. The design must address the design challenges and constraints identified in Phase 1 to the extent that is possible before building a prototype, including due consideration of factors such as cost and security.
  • Phase 3: Build and refine the proof of concept. The solution prototype is based on the design established in Phase 2, making and documenting changes as needed. Design changes based on shortcomings and additional opportunities identified during testing should be documented as part of this phase.

Completion phases (4-6)

The last three phases of the project methodology proceed only after the decision has been made to go forward with productizing the solution. As such, these phases are explicitly concerned with hardening the solution in terms of stability, security, and manageability, preparing it for mass production, and monetizing it to realize its commercial potential.

The completion phases of the project involve shifting the solution to industrial-grade sensors and related components, building it out using a commercial-grade gateway, and finalizing the feature set.

  • Phase 4: Produce a solid beta version. Once the project has been approved as a viable solution to develop toward production, the next step is to produce a product-oriented version that is explicitly intended to finalize the design. This version represents a significant investment in resources, including commercial-grade sensors and other components, as well as a commercial IoT gateway.
  • Phase 5: Evaluate functionality and add features. The completed beta version of the solution is tested to verify that it functions correctly according to the design parameters. As part of the testing process, the project team also identifies additional features and functionality and incorporates them into the solution to make it more robust and valuable to end users.
  • Phase 6: Finalize design and move into production. Once the product is feature-complete, the team hardens the solution by adding advanced manageability and security features, as well as optimizing the design as needed to enhance factors such as marketability and efficiency of manufacturing. The production user interface (UI) for the solution is finalized. This phase also includes final planning for merchandising and marketing the solution before moving into full production.

Phase 1: Defining the opportunity 

While traditional vending machines represent lucrative revenue streams, they are woefully inefficient. Each machine must be serviced on a regular basis by a human attendant to replenish the machine’s stock of product. This task is typically handled by assigning machines to regular routes that are followed by personnel in trucks. 

To understand the inherent inefficiency in this approach, consider the activity along a route that includes a high-rise office building. Here, the attendant pulls up in front of the building and has the choice either to guess what will be needed in the machines up on the 15th and 20th floors, bring the product up, and then make another round trip to bring the rest of what was needed, or else to make a dedicated inventory trip with a notepad in hand. Either approach takes needless time and effort that costs the vending company money. 

Moreover, distributors must seek a balance between dispatching too many trips by attendants (wasting payroll hours) or too few (leaving machines depleted of stock and missing out on revenue). The situation becomes even more problematic because the distributor must depend to some degree on end-customers to report when a machine is out of order.

Project initiators at Intel determined that an intelligent vending machine is viable as the basis for a potential project to demonstrate IoT capabilities and the project methodology described in this document. That core group identified skill sets that were likely to be required during the project, including project management, programming, cloud architecture, and documentation. Based on that list of required skills, the core group formed the full project team, drawn mostly from Intel employees, with external personnel included in a small number of instances to round out the expertise of the team.

The full project team’s first order of business was to quantify the potential opportunity associated with the project, as the basis for the initial prototype design. The core opportunity for this use case was identified as enabling a vending machine to intelligently monitor its level of product inventory and its operational status, and to be able to report that information back through an IoT gateway to the cloud.

The team elected to integrate cloud resources for the data store and administrative functionality. The goal of this approach was to facilitate a fully connected and scalable solution that optimizes operations using an overall view of a fleet of vending machines. The key value of the cloud approach lies in the potential for analytics, which could predict sales to optimize the supply chain among many distributed machines. It could also be used to optimize the efficiency of the personnel who replenish the inventory in the machines and perform unscheduled mechanical maintenance.

Phase 2: Designing the proof-of-concept prototype 

The project team determined that, for this project to be as useful as possible for the developer community, it should be based on readily available parts and technologies. Based on that decision, it was decided to limit the bill of materials to the Grove* IoT Commercial Developer Kit, Intel® IoT Developer Kit, and Intel® IoT Gateway Software Suite (https://software.intel.com/en-us/iot/hardware/gateways/wind-river), using software technologies that are widely used in the industry and available at low or no cost, with the use of free open-source software (FOSS) wherever practical.

To accelerate the prototype stage and reduce its complexity, the team elected to build the local portion of the prototype as a bench model that would consist of the compute platform and sensors, without incorporating an actual vending machine, although such a device would be added at a future stage of the project.

Prototype hardware selection

The Intel® NUC Kit DE3815TYKHE small-form-factor PC was chosen for this project. This platform is pictured in Figure 2, and its high-level specifications are given in Table 1. In addition to its robust performance, the team felt that, as Intel’s most recently introduced hardware platform specifically targeting IoT, it was a forward-looking choice for this demonstration project. Based on the Intel® Atom™ processor E3815, the Intel NUC offers a fanless thermal solution, 4 GB of onboard flash storage (and SATA connectivity for additional storage), as well as a wide range of I/O. The Intel NUC is conceived as a highly compact and customizable device that provides capabilities at the scale of a desktop PC.

To simplify the process of interfacing with sensors, the team elected to take advantage of the Arduino* ecosystem using the Arduino* 101 board, also shown in Figure 2, with specifications given in Table 1. This board makes the Intel NUC both hardware and pin compatible with Arduino shields, in keeping with the open-source ideals of the project team. While Bluetooth* is not used in the current iteration of the project, the board does have that functionality, which the team is considering for future use.

Figure 2. Intel® NUC Kit DE3815TYKHE and Arduino* 101 board.

 


Table 1. Prototype hardware used in intelligent vending project.

 

Intel® NUC Kit DE3815TYKHE

  • Processor: Intel® Atom™ processor E3815 (512K cache, 1.46 GHz)

  • Memory: 8 GB DDR3L-1066 SODIMM (max)

  • Networking/IO: Integrated 10/100/1000 LAN

  • Dimensions: 190 mm x 116 mm x 40 mm

Arduino* 101 Board

  • Microcontroller: Intel® Curie™ Compute Module @ 32 MHz

  • Memory: 196 KB flash memory, 24 KB SRAM

  • IO: 14 digital I/O pins, 6 analog I/O pins

  • Dimensions: 68.6 mm x 53.4 mm

For the sensors and other components needed in the creation of the prototype, the team chose the Grove* Starter Kit for Arduino* (manufactured by Seeed Studio), which is based on the Grove* Starter Kit Plus used in the Grove* IoT Commercial Developer Kit. This collection of components is available at low cost, and because it is a pre-selected set of parts, it reduces the effort required to identify and procure the bill of materials for IoT prototypes in general. Selection of sensors and other components for the prototype (detailed in the following section) was guided by the following key data:

  • Internal temperature of the machine
  • Inventory levels of each vendable item in the machine
  • Door open or closed status
  • Detection of a jam in the vending coil
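
Taken together, these data points suggest the shape of a single telemetry record reported by the machine. The following is a minimal sketch in JavaScript (the language later used for the project's Node.js components); all field names are illustrative assumptions, not the project's actual schema:

```javascript
// Illustrative shape of one telemetry reading from the prototype; the field
// names here are assumptions for this sketch, not the project's real schema.
function makeReading(temperatureC, inventory, doorOpen, coilJammed) {
  return {
    timestamp: Date.now(),
    temperatureC,   // internal temperature of the machine
    inventory,      // per-product stock levels, e.g. { "product-a": 8 }
    doorOpen,       // door open/closed status
    coilJammed      // whether a jam was detected in a vending coil
  };
}

const reading = makeReading(7.5, { "product-a": 8, "product-b": 3 }, false, false);
console.log(reading.temperatureC); // 7.5
```

A record like this maps one-to-one onto the sensors selected from the kit, which keeps the later control-application logic simple.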

Prototype software specification

For the prototype OS, the team considered Yocto Linux* as well as the Intel® IoT Gateway Software Suite. Yocto Linux supports the project’s ideal of using free open-source software (FOSS), and it offers a high degree of flexibility, robust control over the source code, and the ability to create a custom lightweight embedded OS that is tailored to the needs of the system. The Intel IoT Gateway Software Suite, on the other hand, provides an out-of-the-box implementation with no customization required. The team identified minimizing such up-front customization as a best practice for prototype development, and so the Intel IoT Gateway Software Suite was chosen as the OS for the prototype.

The following applications were identified to be developed as part of the solution:

  • Control application will run on the vending machine itself, gathering data from sensors and handling operation of the electromechanical aspects of the solution (e.g., turning the vending coils) as well as data exchange with both human users (e.g., customers and administrators) and with the cloud.
  • Administration application will operate on a PC or tablet and allow for a detailed view into the operation of the vending machine, including events, status, and logs, as well as access to the cloud data and analytics. This application will also support regular maintenance.
  • Customer application will operate on a smartphone or other mobile device, enabling a customer to purchase products from the machine. 

Phase 3: Building and refining the proof-of-concept prototype

Using the Intel NUC Kit DE3815TYKHE, the Arduino 101 board, and the Grove Starter Kit Plus IoT Edition, the team developed the proof of concept prototype illustrated in Figure 3 to simulate a simple vending machine that dispenses two products. It includes a 2x16-character LCD display that shows product names and price information, as well as two product selection buttons, a step motor to dispense products, and two LEDs (green and red) to show the machine status. It also includes a temperature sensor and a “fault detection” button. Once the buy button on the customer application is pressed, the product is dispensed; for simplicity, payment processing hardware was left out of the prototype.

Figure 3. Intelligent vending machine proof of concept prototype.

Prototype Hardware Implementation

The bill of materials for the prototype is summarized in Table 2.

Table 2. Intelligent vending machine prototype components.

 

Component

Details

Base System

Intel® NUC Kit DE3815TYKHE

http://www.intel.com/content/www/us/en/support/boards-and-kits/intel-nuc-kits/intel-nuc-kit-de3815tykhe.html

Arduino* 101 Board

https://www.arduino.cc/en/Main/ArduinoBoard101

USB Type A to Type B Cable

For connecting Arduino 101 board to NUC

Components from Grove* Starter Kit Plus IoT Edition

Base Shield V2

http://www.seeedstudio.com/depot/Base-Shield-V2-p-1378.html

Gear Stepper Motor with Driver

http://www.seeedstudio.com/depot/Gear-Stepper-Motor-with-Driver-p-1685.html

Button Module

http://www.seeedstudio.com/depot/Grove-Button-p-766.html

Temperature Sensor Module

http://www.seeedstudio.com/depot/Grove-Temperature-Sensor-p-774.html

Green LED

http://www.seeedstudio.com/depot/Grove-Green-LED-p1144.html

Red LED

http://www.seeedstudio.com/depot/Grove-Red-LED-p-1142.html

LCD with RGB Backlight Module

http://www.seeedstudio.com/depot/Grove-LCD-RGB-Backlight-p-1643.html

Touch Sensor

http://seeedstudio.com/depot/Grove-Touch-Sensor-p-747.html

Prototype Software Implementation

The control application used in the proof of concept prototype was written in C++. It also uses a Node.js component for accessing the Azure cloud. The cloud is used to exchange events with the mobile and administration applications. Such events include for example, temperature alerts and product dispense requests. The mobile application was written in JavaScript* for use in a web browser, to avoid the necessity of migrating the application to multiple smartphone platforms.
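
The events exchanged with the cloud can be pictured as small structured messages. The sketch below shows the general idea in JavaScript; the event types and field names are illustrative assumptions, not the project's actual schema:

```javascript
// Sketch of the kinds of events exchanged through the cloud between the
// control, mobile, and administration applications. The event types and
// field names are illustrative assumptions, not the project's real schema.
function makeEvent(type, payload) {
  return {
    type,
    payload,
    createdAt: new Date().toISOString(),
    synced: false   // set to true once the event has reached the cloud
  };
}

const tempAlert = makeEvent("temperature-alert", { temperatureC: 12.4, limitC: 10 });
const dispense  = makeEvent("dispense-request", { product: "product-a", tray: 1 });

console.log(dispense.type); // "dispense-request"
```

Keeping events in a uniform envelope like this is what allows one cloud channel to carry temperature alerts and dispense requests alike.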

The development environment used to develop the software for this solution was Intel® System Studio, a plug-in for the Eclipse* IDE that facilitates connecting to the NUC and developing applications in C/C++.

In addition, the development of this solution used Libmraa, a C/C++ library that allows for direct access to I/O on the NUC, as well as Firmata, which allows for programmatic interaction with the Arduino development environment, taking advantage of Arduino’s hardware abstraction capabilities. Abstracting Firmata using Libmraa enables greater programmatic control of I/O on the NUC, simplifying the process of gathering data from sensors. UPM provides the specific function calls that are used to access sensors.

Phase 4: Producing a Solid Beta Version

With a proof of concept prototype up and running, the team turned its attention to creating the production version of the intelligent vending machine. The solution as a whole was conceived as including the following main parts:

  • Vending machine, which dispenses products and communicates data back through the gateway. This part of the solution is complex and custom-built, including a variety of sensors and related components.
  • Gateway, which is purchased as a commercial product based on Intel architecture and implemented using custom-developed software.
  • Administration and customer applications, implemented in JavaScript, which are used to control the solution as a whole and to generate and access cloud-based analytics.
  • Cloud analytics, based on Microsoft Azure, which allow for the development of insights to improve business processes, based on usage data from the vending machine over time.

Selecting Vending Machine Components

An early effort in the completion phases of the project involved selecting the specific components that would make up the final solution in production.

Vending Machine Device Procurement

Whereas the team had elected to create the proof of concept prototype as a board-level simulation of a vending machine, the production version was to be an actual, functioning vending machine. The team investigated having a custom machine purpose-built, as well as purchasing a used machine that could be retrofitted for the purposes of this project. Ultimately, a custom machine was selected in order to support selling the widest possible range of products. The initial specification for the custom vending machine and a picture of the machine itself at an early stage of fabrication are shown in Figure 4.


Vending Machine Model Specification

Custom tabletop vending machine for a variety of small products packed in boxes, blister packs, or bags. Each of three coil-driven vend trays will be configured for a different size product:

  • 3” close pitch coil for 12-14 small blister packs
  • 4” medium pitch coil for 9-12 medium size boxes or packages
  • large 5-6” coil for 6-8 larger packages such as t-shirts

The coils will be driven by one stepper motor per coil and will drop product into a single-wide tray at the bottom of the machine. There will be a plexiglass window for viewing vend selections and optional cutouts for Intel's choice of flatscreen and/or keypad.

Machine body will be powder-coated steel, trays will be aluminum, and vend coils will be plated steel. Front of machine will open to refill product, and rear will open to install and service vending mechanism. 

Approximate dimensions of the machine will be 24-30” deep x 36” tall x 30” wide. Overall target weight is under 70 lbs.

The customer application allows for purchasing products.


Figure 4. Vending machine specification and photo of device during fabrication.

Other key decisions to be made at this stage included the choice of industrial-grade sensors, an Intel® architecture-based commercial gateway, a fully supported production OS, a cloud service for data storage and analytics, and software technologies for the administration and customer application.

Sensors and Related Components Selection

Industrial-grade sensors and related components to replace those from the Grove Starter Kit that were used in the proof of concept prototype are detailed in Table 3.

Table 3. Production intelligent vending machine components.

Component

Details

Vending Machine Model

Custom-fabricated:

  • Chassis with a hinged front door and removable back panel

  • Removable tray with three coils for dispensing products

  • Three stepper motors (one per coil), each equipped with switches for sensing a full coil rotation

  • Removable tray for electronic parts

Dell Wyse* IoT Gateway

https://iotsolutionsalliance.intel.com/solutions-directory/dell-iseries-wyse-3290

USB Type A to Micro-USB Type B Cable

Connects I2C/GPIO controller to gateway

12V 5A Power Supply

For stepper motor driver board

UMFT4222EV USB to I2C/GPIO Controller

http://www.mouser.com/new/ftdi/ftdiumft4222ev/

PCA9555-Based GPIO Expander

http://www.elecfreaks.com/store/iic-gpio-module-p-692.html

SparkFun Quadstepper Motor Driver Board

https://www.sparkfun.com/products/retired/10507

AM2315 Temperature and Humidity Sensor

https://www.adafruit.com/product/1293

Grove LCD with RGB Backlight Module

http://www.seeedstudio.com/depot/Grove-LCDRGB-Backlight-p-1643.html

Red LED Panel Mount Indicator

http://www.mouser.com/ProductDetail/VCC/CNX714C200FVW

White LED Panel Mount Indicator

http://www.mouser.com/ProductDetail/VCC/CNX714C900FVW

Gateway Selection

Factors in selecting the gateway to be used in the product version of the intelligent vending machine included the following:

  • Robust compute resources to ensure smooth performance, without errors caused by the system bogging down during operation, particularly considering the need for communication with the cloud as part of normal usage.
  • Ready commercial availability was clearly needed so the project could proceed on schedule. While some members of the team expressed a preference for the Vantron VT-M2M-QK gateway, difficulty in obtaining that device in a timely manner disqualified it from use in the project.

Ultimately, the Dell iSeries Wyse 3290 IoT Gateway, specifications of which are summarized in Table 4, was chosen for implementation in the product phase of this project. That gateway provides the needed performance for present and foreseeable functionality, as well as ready availability (potentially in large quantities) for hypothetical distribution of the vending machine as a commercial product.

Table 4. Gateway specifications for intelligent vending machine product phase.

 

Dell iSeries Wyse* 3290 IoT Gateway

Processor

Intel® Celeron® processor N2807 (1 M cache, up to 2.16 GHz)

Memory

4 GB DDR3 RAM 1600 MHz

Networking

  • LAN: 1 x 10/100/1000 BASE-T

  • WLAN: 802.11a/b/g/n/ac

  • PAN: Bluetooth 4.0 Low Energy

Physical Specifications

  • Dimensions: 69mm x 197.5mm x 117mm

  • Weight: 2.34kg

Continuing to use Intel IoT Gateway Software Suite (which the prototype was already based on) was a straightforward decision, particularly because the gateway is pre-validated for that OS. Moreover, Intel NUCs and gateways can both run Intel IoT Gateway Software Suite, simplifying the process of porting software elements from the prototype to the product version of the intelligent vending machine model. Likewise, the other core software components such as Intel System Studio and the libraries used in the prototype were held constant to simplify the transition to the product phase.

Online Operation

The system includes the software running on the IoT gateway, the Azure cloud, and a server-side application, as illustrated in Figure 5.

Figure 5. Intelligent vending machine topology: online operation.

 

IoT Gateway Software Implementation

The IoT gateway software consists of three parts:

  • Control application is implemented in C++ using the IoT Developer Kit libraries libmraa and libupm; it performs the following tasks:

    Checks for mechanical failures and reports failure/no-failure events to the local database.
    Monitors for temperature fluctuations in and out of the allowed range, reporting events when the temperature goes out of and returns to that range.
    Checks for “dispense” events, which are generated when a product selection button is pressed in the customer application and sent to the machine through the cloud.

  • Local DB is used for inter-process communication between the control application and the DB daemon. The local SQLite database uses the file $HOME/Vending_Prototype/events.sqlite3, which contains the “events” table holding the events to be reported to the cloud. The events table is replicated both ways, to and from the machine.
  • DB daemon is implemented using Node.js; it relays reported events bi-directionally between the local database and the cloud.
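
The DB daemon's replication job can be sketched as follows. This is a simplified in-memory illustration of the bidirectional sync, not the daemon's actual code, which works against the SQLite "events" table; the data structures and field names are assumptions:

```javascript
// Simplified sketch of bidirectional event replication: push local events
// not yet synced to the cloud, and pull cloud events the local store lacks.
// In-memory arrays stand in for the SQLite table and the cloud store.
function syncEvents(localEvents, cloudEvents) {
  // Local -> cloud: anything recorded locally but not yet marked synced.
  const toCloud = localEvents.filter(e => !e.synced);
  toCloud.forEach(e => { e.synced = true; });

  // Cloud -> local: anything the local store has not seen, matched by id.
  const localIds = new Set(localEvents.map(e => e.id));
  const toLocal = cloudEvents.filter(e => !localIds.has(e.id));
  localEvents.push(...toLocal);

  return { pushed: toCloud.length, pulled: toLocal.length };
}

const local = [{ id: 1, type: "temperature-alert", synced: false }];
const cloud = [{ id: 2, type: "dispense-request" }];
const result = syncEvents(local, cloud);
console.log(result); // { pushed: 1, pulled: 1 }
```

In the real solution, the "pull" direction is what delivers dispense requests from the customer application down to the machine.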

Azure Cloud Implementation

The Azure cloud maintains information about product inventory for the intelligent vending machine, keeps track of the events received from customer app and vending machine, and provides functionality to analyze this data and trigger responses to various conditions (e.g., low inventory or mechanical failure). Primary cloud analytics functions are as follows:

  • If a product is out of stock, that information is sent to the cloud and an alert displays in the admin app for the user. 
  • If the vending machine internal temperature reaches above or below a preset threshold, that information is sent to the cloud for analysis. An alert displays in the admin app for the user. 
  • If any of the three vending machine coils function improperly, that information is sent to cloud for analysis. An alert displays in the admin app for the user. 
  • If the vending machine tray is pulled out, a “Machine Opened” status is shown on the LCD and the red LED lights. Once the tray is pushed back in, a “Machine is ready” status is shown on the LCD and the green LED lights.
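
These alert conditions can be expressed together as one pure function over the machine's reported status. The sketch below is illustrative only; the thresholds and field names are assumptions, not the project's cloud code:

```javascript
// Sketch of the alert rules described above, expressed as one pure function
// over a machine-status object. All field names are illustrative assumptions.
function evaluateAlerts(status) {
  const alerts = [];
  // Out-of-stock products
  for (const [product, qty] of Object.entries(status.inventory)) {
    if (qty === 0) alerts.push({ type: "out-of-stock", product });
  }
  // Internal temperature outside the preset range
  if (status.temperatureC < status.minC || status.temperatureC > status.maxC) {
    alerts.push({ type: "temperature", value: status.temperatureC });
  }
  // Any of the three vending coils malfunctioning
  status.coils.forEach((ok, i) => {
    if (!ok) alerts.push({ type: "coil-fault", coil: i + 1 });
  });
  // Machine opened (tray pulled out)
  if (status.trayOut) alerts.push({ type: "machine-opened" });
  return alerts;
}

const alerts = evaluateAlerts({
  inventory: { "product-a": 0, "product-b": 5 },
  temperatureC: 11, minC: 2, maxC: 10,
  coils: [true, false, true],
  trayOut: false
});
console.log(alerts.length); // 3
```

Each alert produced this way would then be surfaced to the user in the admin app, as described above.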

The Admin app provides information regarding home, setup, log history, inventory status and alert details.

Phase 5: Finalizing Design and Releasing to Production

The project team tasked with developing this solution was engineering-centric, and so producing first-rate UIs for the final product was somewhat outside the team’s core competency. Therefore, the team engaged an external resource on a contract basis for that purpose. The UI provider participated in regular team meetings and dedicated meetings with the core software-development team.

During those discussions, the UIs were refined to incorporate additional capabilities and functionality. For example, the team added color coding and options for switching between Fahrenheit and Celsius temperatures to the administration application UI. Functionality was added to the customer application UI asking users to verify that they wish to make the purchase before the transaction is made, along with other minor refinements.

Admin Application 

The administration application UI, shown in Figure 6, is designed to operate on a tablet computer and to provide administrative functionality on the intelligent vending machine. 

Figure 6. Intelligent vending machine administration application UI.

The administration application UI incorporates the following primary elements:

  1. Menu system contains a “home” button to go to the home screen (shown in the figure), an “About” screen with information about the software, a “Setup” button that provides hardware-setup details (including placement and connectivity of sensors), a “Log” button to access an events log that tracks purchases, alerts, and maintenance, and an “Alert” button that provides information about active maintenance alerts, including the type and time of occurrence for each alert.
  2. Inventory panel reflects inventory levels that are set within the cloud, using color coding to indicate those levels: dark blue for levels above two thirds of capacity, lighter blue for levels one third to two thirds, and orange for levels below one third. Clicking on the panel generates a detailed inventory window that displays exact inventory quantities, which tray the item is in, and price for each item.
  3. Temperature module is a dual-threshold radial temperature graph, with display of the machine’s current internal temperature selectable as Fahrenheit or Celsius. The white bar represents the acceptable temperature range; if the temperature goes outside that range, the system generates an alert. The software polls the temperature and updates the UI every few seconds.
  4. Coil status module reports on the status of the vending coils and motors, indicating if there is any malfunction, such as a jam or electrical failure.
  5. Vending unit module provides visual information about the presence and location of error conditions, as well as the door open/closed status.
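
The inventory color coding described in item 2 reduces to a simple mapping from fill level to display color. A sketch, with color names assumed for illustration:

```javascript
// Color coding for the inventory panel: dark blue above two thirds of
// capacity, lighter blue between one third and two thirds, orange below
// one third. The color names are assumptions for this sketch.
function inventoryColor(quantity, capacity) {
  const level = quantity / capacity;
  if (level > 2 / 3) return "dark-blue";
  if (level >= 1 / 3) return "light-blue";
  return "orange";
}

console.log(inventoryColor(10, 12)); // "dark-blue"
console.log(inventoryColor(5, 12));  // "light-blue"
console.log(inventoryColor(2, 12));  // "orange"
```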

Customer Application 

The customer application, shown in Figure 7, is designed to operate on a mobile device, allowing customers to interact with the vending machine in order to make a purchase. 

Figure 7. Intelligent vending machine customer application.

The customer application incorporates the following primary elements:

  • Status pane indicates whether the machine is ready for an order to be made, and it also functions as a shopping cart to display a list of items selected by the user, pending sale. When items are added to the shopping cart, a “Buy” button appears that indicates the sale total; clicking on that button completes the purchase by sending the order information to the cloud and upon receipt of confirmation from the cloud, dispensing the item and updating the inventory number.
  • Ordering pane contains a selection button for each product in the machine; when clicked, the button adds the item to the shopping cart list in the status pane. Each product button is accompanied by fields that display the amount of inventory in stock as well as the price of the item.
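
The shopping-cart behavior of the status pane can be sketched as a small object that accumulates selections and computes the total shown on the "Buy" button. This is an illustrative sketch only; the product names and prices are assumptions:

```javascript
// Minimal sketch of the customer app's shopping cart: items are added from
// the ordering pane, and total() gives the amount shown on the "Buy" button.
function makeCart() {
  const items = [];
  return {
    add(product, price) { items.push({ product, price }); },
    total() { return items.reduce((sum, i) => sum + i.price, 0); },
    list() { return items.slice(); }
  };
}

const cart = makeCart();
cart.add("product-a", 1.5);
cart.add("product-b", 2.25);
console.log(cart.total()); // 3.75
```

On "Buy", the order information in the cart would be sent to the cloud, and the machine would dispense once confirmation comes back, as described above.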

Completed Production Vending Machine

The assembled intelligent vending machine, with the gateway, sensors, and other components installed, is shown in Figure 9.

Figure 9. Fully assembled intelligent vending machine.

 

Phase 6: Evaluating Functionality and Adding Features

Once the actual product version of the vending machine was operational, several team members began to identify possible future functionality that could be built into the product.

Enhancing Cloud Analytics

The team identified the opportunity to enhance cloud-analytics functionality using Microsoft Power BI, a cloud-hosted business intelligence and analytics service integrated with Microsoft Azure, together with Power BI Desktop. These capabilities offer data-visualization enhancements for the intelligent vending solution.

Enhancing Event Notification Data Flows

During the evaluation phase, the team identified possibilities to automate certain aspects of the machine’s operation using event notifications in conjunction with Azure analytics. Specifically, future enhancements based on the following data flows were identified:

  • Inventory. If the quantity of a product drops to two units, a future enhancement could cause a notification to be sent to the cloud for analysis, and an alert could be sent to the administration application as a notification to reorder stock. This sequence could be repeated if the quantity drops to zero, and notification could also be sent to the machine display and the mobile application, indicating that the item is out of stock.
  • Maintenance. If the machine malfunctions (e.g., the coil fails to make a full turn, the temperature goes outside preset limits, etc.), a future enhancement could cause a notification to be sent to the cloud for analysis, and service personnel could be notified. An alert could also be sent to the administration application to monitor the status of the service call.
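
The proposed inventory data flow amounts to threshold-driven notifications. A sketch of that logic, with the thresholds taken from the text and the recipient names assumed for illustration:

```javascript
// Sketch of the proposed inventory notifications: a reorder alert when stock
// drops to two units, plus out-of-stock notices to the machine display and
// mobile app at zero. Recipient names are assumptions for this sketch.
function inventoryNotifications(quantity) {
  const notes = [];
  if (quantity <= 2) notes.push({ to: "admin-app", message: "reorder-stock" });
  if (quantity === 0) {
    notes.push({ to: "machine-display", message: "out-of-stock" });
    notes.push({ to: "mobile-app", message: "out-of-stock" });
  }
  return notes;
}

console.log(inventoryNotifications(2).length); // 1
console.log(inventoryNotifications(0).length); // 3
```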

Conclusion

Tracing the path to product during the development of the intelligent vending machine is intended as a pattern for teams to consider as they build their own solutions. Beginning with an ideation phase and rapid prototyping on low-cost equipment and a simplified physical model allows projects to take off quickly. Decisions can therefore be made early about the potential viability of the project, when a relatively small investment in time and money has been made.

This project also suggests a model for thinking about cloud analytics in IoT solutions. Rather than focusing just on opportunities for big-data insights, this implementation reveals how the cloud can function foremost as a communication nexus and centralized data store. At the same time, the cloud data provides substantial opportunities for generating business intelligence to optimize supply chains, increase maintenance efficiency, and enhance profitability.

More Information

IoT Path-to-Product: How to Build an Intelligent Vending Machine


This Internet of Things (IoT) path-to-product project is part of a series of articles that portray how to develop a commercial IoT solution from the initial idea stage, through prototyping and refinement, to create a viable product. It uses the Grove* IoT Commercial Developer Kit, with the prototype built on an Intel® Next Unit of Computing (NUC) Kit DE3815TYKHE small-form-factor PC and Arduino* 101 board.

This document demonstrates how to build a prototype and utilize these same technologies in deploying an Intel IoT Gateway and industrial sensors. It does not require special equipment or deep expertise, and as such, it is intended to illustrate a generalized set of prototyping phases for IoT projects.

Note: Known in the US as “Arduino* 101,” this board is known elsewhere as “Genuino* 101.” It is referred to throughout the rest of this document as the “Arduino 101” board.

This article contains a how to on building an Intelligent Vending Machine Prototype. To see the making of, see IoT Path-to-Product: The Making of an Intelligent Vending Machine.

Visit GitHub for this project's latest code samples and documentation.

 

Introduction

The completed intelligent vending machine is shown in Figure 1. From this exercise, developers will learn to do the following:

Figure 1. Completed intelligent vending machine.
  • Connect to the Intel NUC Kit DE3815TYKHE.
  • Interface with the IO and sensor repository for the NUC using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore IoT and implement innovative projects.
  • Run this code sample in Intel® System Studio, an IDE for creating applications that interact with sensors and actuators, enabling a quick start for developing software for Intel IoT platforms.
  • Set up and connect to cloud services using the Microsoft Azure* platform, which provides cloud analytics and a communications hub for various parts of the solution.

What it does

This project enables you to simulate the following functionality of an intelligent vending machine:

  • Vend multiple products
  • Track inventory levels of products in the vending machine
  • Send alerts when the inventory levels fall below a preset threshold
  • Identify power start-up and malfunctions 
  • Notify when the vending machine door is open
  • Display coil status
  • Monitor and send alerts on internal temperature of the machine
  • Provide a visual log of past maintenance, inventory, and events
  • Provide a companion app for purchasing products

How it works

This intelligent vending machine prototype utilizes sensors to trigger a variety of different actions:

  • Buttons simulate the selection of specific products from the vending machine
  • Stepper motor turns to dispense product 
  • Red and green LEDs indicate failure and OK status
  • Temperature sensor monitors interior of vending machine
  • LCD displays vending machine status

All data flows occur through the cloud as an intermediary. For example, if a customer selects a product using the mobile application, the selection is passed to the cloud and then to the vending machine itself. This approach enables inventory levels to be maintained in the cloud. 

Similarly, data related to product offerings, pricing, and maintenance events are centrally managed in the cloud, enabling comprehensive trend analysis and reporting. Maintenance alerts, for example, can be sent to service personnel as well as to the administration application for tracking purposes. Note that while the potential for these capabilities is considered within the design of the intelligent vending machine prototype, they are not implemented as part of this demo.

Set up the Intel NUC Kit DE3815TYKHE

This section gives instructions for installing the Intel® IoT Gateway Software Suite on the NUC.

Note: If you have acquired a Grove IoT Commercial Developer Kit, the Intel IoT Gateway Software Suite is already pre-installed on the NUC.

  1. Create an account on the Intel® IoT Platform Marketplace if you do not already have one.
  2. Download the Intel IoT Gateway Software Suite and follow the instructions received by email to download the image file.
  3. Unzip the archive and write the .img file to a 4GB USB drive.

    On Microsoft Windows*, you can use a tool like Win32 Disk Imager*: https://sourceforge.net/projects/win32diskimager.
    On Linux*, use sudo dd if=GatewayOS.img of=/dev/sdX bs=4M; sync, where sdX is your USB drive.

  4. Unplug the USB drive from your system and plug it into the NUC, along with a monitor, keyboard, and power cable.
  5. Turn on the NUC and enter the BIOS by pressing F2 at boot time.
  6. Boot from the USB drive, as follows:

    a. From the Advanced menu, select Boot.
    b. From Boot Configuration, under OS Selection, select Linux.
    c. Under Boot Devices, make sure the USB check box is selected.
    d. Save the changes and reboot.
    e. Press F10 to enter the boot selection menu and select the USB drive.

  7. Log in to the system with root:root.
  8. Install Wind River Linux on local storage with the following command:
    ~# deploytool -d /dev/mmcblk0 --lvm 0 --reset-media -F

    Note: Due to the limited size of the local storage drive, we recommend against setting a recovery partition. You can return to the factory image by using the USB drive again.

  9. Use the poweroff command to shut down your gateway. Then, unplug the USB drive, and turn the gateway back on to boot from the local storage device.
  10. Plug in an Ethernet cable and use the command ifconfig eth0 to find the IP address assigned to your gateway (assuming you have a proper network setup).
    You can now use your gateway remotely from your development machine if you are on the same network as the gateway. If you would like to use the Intel® IoT Gateway Developer Hub instead of the command line, enter the IP address into your browser and go through the first-time setup.
  11. Use the Intel® IoT Gateway Developer Hub to update the MRAA and UPM repositories to the latest versions from the official repository (https://01.org). You can achieve the same result by entering these commands:
     ~# smart update
     ~# smart upgrade
     ~# smart install upm
  12. Use the following commands to install Java* 8 support (after executing the previous commands). These remove the precompiled OpenJDK* 7 and install OpenJDK* 8, which works with MRAA and UPM:
    ~# smart remove openjdk-bin
    ~# smart install openjdk-8-jre
  13. Plug in an Arduino 101 board and reboot the NUC. The Firmata* sketch is flashed onto the Arduino 101, and you are now ready to use MRAA and UPM with it.

Set up the Arduino* 101 board

Setup instructions for the Arduino* 101 board are available at https://www.arduino.cc/en/Guide/Arduino101

Connect other components

This section covers making the connections from the NUC to the rest of the hardware components. The bill of materials for the prototype is summarized in Table 1, and the assembly of those components is illustrated in Figure 2.

Table 1. Intelligent vending machine prototype components.

Base system:

  • Intel® NUC Kit DE3815TYKHE (http://www.intel.com/content/www/us/en/support/boards-and-kits/intel-nuc-kits/intel-nuc-kit-de3815tykhe.html)
  • Arduino* 101 Board (https://www.arduino.cc/en/Main/ArduinoBoard101)
  • USB Type A to Type B Cable, for connecting the Arduino 101 board to the NUC

Components from Grove* Starter Kit Plus IoT Edition:

  • Base Shield V2 (http://www.seeedstudio.com/depot/Base-Shield-V2-p-1378.html)
  • Gear Stepper Motor with Driver (http://www.seeedstudio.com/depot/Gear-Stepper-Motor-with-Driver-p-1685.html)
  • Button Module (http://www.seeedstudio.com/depot/Grove-Button-p-766.html)
  • Temperature Sensor Module (http://www.seeedstudio.com/depot/Grove-Temperature-Sensor-p-774.html)
  • Green LED (http://www.seeedstudio.com/depot/Grove-Green-LED-p1144.html)
  • Red LED (http://www.seeedstudio.com/depot/Grove-Red-LED-p-1142.html)
  • Touch Sensor Module (http://www.seeedstudio.com/depot/Grove-Touch-Sensor-p-747.html)
  • LCD with RGB Backlight Module (http://www.seeedstudio.com/depot/Grove-LCD-RGB-Backlight-p-1643.html)

Figure 2. Intelligent vending machine proof of concept prototype.

 

Install Intel® System Studio

Intel® System Studio is a plug-in for Eclipse* that allows you to connect to, update, and program IoT projects on an Intel NUC or other compatible board. It helps you write applications in C, C++, and Java languages and provides two libraries, specially designed for the Intel® IoT Developer Kit:

  • MRAA is a low-level library that offers a translation from the input/output interfaces to the pins available on your IoT board.

  • UPM is a sensor library with multiple language support that utilizes MRAA. UPM allows you to conveniently use or create sensor representations for your projects.

Install on Windows*

Note: 7-Zip* supports extended path names, which some files in the compressed file have, so use only 7-Zip software to extract the installer file.

  1. Download the 7-Zip software from http://www.7-zip.org/download.html.
  2. Right-click on the downloaded executable and select Run as administrator.
  3. Click Next and follow the instructions in the installation wizard to install the application.
  4. Using 7-Zip, extract the installer file.

Warning: Be sure to extract the installer file to a folder location that does not include any spaces in the path name. For example, the folder C:\My Documents\ISS will not work, while C:\Document\ISS will.

Install on Linux*

  1. Download the Intel® System Studio installer file for Linux*.
  2. Open a new Terminal window.
  3. Navigate to the directory that contains the installer file.
  4. Enter the command: tar -jxvf file to extract the tar.bz2 file, where file is the name of the installer file. For example, tar -jxvf iss-iot-linux.tar.bz2. The command to enter may vary slightly depending on the name of your installer file.

Install on Mac* OS X*

  1. Download the Intel System Studio installer file for Mac* OS X*.
  2. Open a new Terminal window.
  3. Navigate to the directory that contains the installer file.
  4. Enter the command: tar -jxvf file to extract the tar.bz2 file, where file is the name of the installer file. For example, tar -jxvf iss-iot-mac.tar.bz2. The command to enter may vary slightly depending on the name of your installer file.

Note: If you see a message that says "iss-iot-launcher can’t be opened because it is from an unidentified developer", right-click the file and select Open with. Select the Terminal app. In the dialog box that opens, click Open.

Launch Intel® System Studio

  1. Navigate to the directory you extracted the contents of the installer file to.
  2. Launch Intel System Studio as follows:
  • On Windows*, double-click iss-iot-launcher.bat to launch Intel System Studio. 
  • On Linux*, run iss-iot-launcher.sh.
  • On Mac* OS X*, run iss-iot-launcher.

Note: Using the iss-iot-launcher file (instead of the Intel® System Studio executable) will launch Intel System Studio with all the necessary environment settings. Use the iss-iot-launcher file to launch Intel® System Studio every time.

Install Microsoft* Azure* components

The Azure* cloud maintains information about product inventory on intelligent vending machines in the network, keeps track of the events received from vending machines, and could provide future functionality to analyze this data and trigger responses to various conditions (e.g., low inventory or mechanical failure).

Implement the Azure* C++ API

Connecting to Azure* using the C++ API requires compilation of all the following libraries to build the Casablanca project:

Create a web app in Azure* 

Compile Boost

wget -O boost_1_58_0.tar.gz 'http://downloads.sourceforge.net/project/boost/boost/1.58.0/boost_1_58_0.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fboost%2Ffiles%2Fboost%2F1.58.0%2F&ts=1438814166&use_mirror=iweb'
tar xzvf boost_1_58_0.tar.gz
cd boost_1_58_0
./bootstrap.sh
./b2

Compile Casablanca

Clone Casablanca (the project now lives on GitHub at https://github.com/Microsoft/cpprestsdk):

git clone https://git.codeplex.com/casablanca

Compile Casablanca following the instructions at https://casablanca.codeplex.com/wikipage?title=Setup%20and%20Build%20on%20Linux&referringTitle=Documentation

Then clone the Azure* storage client library (https://github.com/Azure/azure-storage-cpp.git):

git clone https://github.com/Azure/azure-storage-cpp.git

SQLite3 installation and table initialization

SQLite3 IPK package installation:

root@galileo:~# opkg install sqlite3
Installing sqlite3 (3:3.8.6.0-r0.0) on root.
Downloading http://iotdk.intel.com/repos/1.5/iotdk/i586/sqlite3_3.8.6.0-r0.0_i586.ipk.
Configuring sqlite3.

Products database creation and initialization:

root@galileo:~# sqlite3 Vending_Prototype/products.sqlite3
SQLite version 3.8.6 2014-08-15 11:46:33
Enter ".help" for usage hints.
sqlite> create table products(name varchar(255) primary key, price smallint, quantity smallint);
sqlite> insert into products values('Coke',150,2);
sqlite> insert into products values('Pepsi',130,3);

Events database creation and initialization:

root@galileo:~# sqlite3 Vending_Prototype/events.sqlite3
SQLite version 3.8.6 2014-08-15 11:46:33
Enter ".help" for usage hints.
sqlite>
sqlite> create table events(time INT, type smallint, key varchar(255), value smallint);

SQLite3 Node.js package installation:

npm install sqlite3

Azure* installation

npm install azure-storage
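With both packages installed, the gateway can shape rows for the events table created earlier and write them with the sqlite3 package or mirror them to Azure* storage. A minimal sketch of the row-shaping step (the numeric event-type codes are illustrative assumptions, not values from this article):

```javascript
// Shape a row for the events table:
// (time INT, type smallint, key varchar(255), value smallint)
const EVENT_TYPES = { VEND: 1, LOW_INVENTORY: 2, FAULT: 3 }; // assumed codes

function makeEvent(type, key, value, nowMs = Date.now()) {
  // Store time as whole seconds to fit the INT column
  return [Math.floor(nowMs / 1000), type, key, value];
}

const row = makeEvent(EVENT_TYPES.VEND, 'Coke', 1, 1438814166000);
console.log(row); // [1438814166, 1, 'Coke', 1]
// With the sqlite3 package, the row would be written as:
//   db.run('INSERT INTO events VALUES (?, ?, ?, ?)', row);
```

The same row can then be forwarded to the cloud with the azure-storage package, keeping the local database and the cloud copy in sync.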

Conclusion

As this how-to document demonstrates, IoT developers can build prototypes with gateway, administrative, mobile, and cloud software functionality at relatively low cost and without specialized skill sets. Using the Grove* IoT Commercial Developer Kit and an Arduino* 101 board, project teams can conduct rapid prototyping to test the viability of IoT concepts as part of the larger path-to-product process.

Visit GitHub for this project's latest code samples and documentation.

What's New? OpenCL™ Runtime 16.1.1 (CPU only)


The 16.1.1 release update includes:

  • Fix for the known incompatibility issue with the CPU Kernel Debugger from the Intel® SDK for OpenCL™ Applications 2016 R2 and the CPU only runtime package version 16.1.
  • Performance optimizations:
    • Compiler vectorizer heuristic tuning for a set of workloads
    • Workgroup fusion optimization improvements
    • Performance enhancements of the vload()/vstore() built-in functions
  • Fix for the issue reported on the forum (https://software.intel.com/en-us/comment/1844607#comment-1844607): vectorizer produces incorrect code on SSE42 architectures when using the samplerless read_imagef() built-in function with image2d_t and int2 coordinates as arguments.
  • The cl_khr_gl_sharing extension was disabled due to incompatibility with the Microsoft* Basic Display Adapter. To use this extension, please install the OpenCL Driver for Intel® Iris™ Graphics and HD Graphics for Windows* OS from https://software.intel.com/en-us/articles/opencl-drivers#iris. The driver package includes the OpenCL Runtime package for CPUs.
  • Due to a performance bug, the Threading Building Blocks (TBB) library was downgraded from version 4.2 (interface version 7001, Oct 2 2013) to version 4.2 (interface version 7005, Jun 1 2014).

 


Case Study- A Very Good Year: Sommely and Intel® IoT Technology Power Smart Inventory Management for Wine Collectors


Asset tracking is a challenge that seems tailor-made for an Internet of Things solution. Imagine sensors connected to a tiny IoT processor that can be mounted on shipping crates, or on individual objects lining the shelves of a retail operation or private home. The sensors collect data–location, temperature, movement, sales or consumption details, for example–and pass it into the cloud where that data can be sliced, diced, and fed into a recommendation engine. Tapping into the engine then helps someone make an informed decision based on what’s trending, under what circumstances the product will be consumed, and what to expect when they try it for themselves. 

Intel® had been exploring that premise, looking at ways in which Intel® Edison™ technology, Intel® Quark™ processors and microcontrollers could be used to prototype just such a system.

Recognizing that a Portland, Oregon-based design company was focusing on the same opportunity, Intel reached out to Uncorked Studios. “We brought them to an invitation-only development day,” Intel IoT innovation manager Shashi Jain explained. “Our goal was very specific. We wanted to collaborate with them, to demonstrate how Intel IoT technology can be used to quickly take an idea from concept to prototype, and ultimately to market.”

A connoisseur’s collection fitted with Sommely caps.

For Uncorked Studios, that idea was a product called Sommely (from sommelier, or wine steward). With Sommely, the luxury wine market served as the vector for proving that a smart asset-tracking system for wine enthusiasts could scale from the domestic wine rack to restaurants, wineries, and beyond.

Uncorked Studios

The aptly named Uncorked Studios is a product design and development company, which, for the six years of its existence, has been focused on “the relationship between digital environments and the physical world.” The company’s team of 42 has created smart-home and wearable products in collaboration with LEGO*, Google*, adidas*, and Samsung*, among others. Working with Intel, Uncorked helped design the multi-camera array used in the Intel® RealSense™ launch.

Sommely is Uncorked Studios’ first foray into the wine and asset-tracking market.

The Sommely Experience

As a smart inventory-management system for wine collectors, Sommely uses several Intel IoT components to keep a running count of what’s in a particular wine collection. The system can also make smart recommendations by drawing on crowd-sourced data to suggest what to drink, when a bottle is ready to drink, and the food it pairs well with.

To accomplish that, a mobile-friendly website communicates with a gateway or a hub that’s connected to the Internet via WiFi. The hub also communicates with individual caps fitted to each of the bottles in a wine collection. The caps hold batteries, a radio that talks to the hub, sensors, and LED lights.

“Say it’s a Tuesday night, and you’ve just ordered pizza,” Marcelino Alvarez, the founder and CEO of Uncorked Studios, explained. “You don’t want to accidentally open a bottle that’s rare, or too expensive. You just want something that pairs well with your pepperoni and mushroom pizza, so you ask the app and the corresponding caps light up, letting you do a visual search through your inventory.”

Along similar lines, the Sommely app gives users the ability to choose wines for special occasions, based on criteria that includes the food the wine is being paired with, the characteristic elements you’re looking for (light and crisp, big and jammy, etc.), and whether a particular bottle is ready to be enjoyed now or if it needs more time in the cellar. The system will also warn when a bottle is past its prime.

The crowd-sourced component, like Sommely itself, is a work-in-progress. Alvarez’s goal is for their app to “play nice with other apps”. “We’re not competing with apps like Delectable Wines or Vivino, so we could partner with them.” He envisions getting data that includes a large sampling of what wine lovers are drinking at any given moment, along with data such as industry and consumer reviews and tasting notes. Sommely could aggregate and present that information in ways that encourage curiosity and exploration.

What does Sommely have that other systems don’t? Traditional, analog inventory systems that use things like spreadsheets and post-it notes, or tags that hang around the neck of each bottle, have drawbacks. “For example, they require a lot of individual attention,” Alvarez pointed out. “You need to write down drink-by dates, assuming you even have that information.” Another disadvantage is that manual systems don’t automatically update when you drink a bottle, making it easy to lose track of what’s in the collection.

“Even if you’re diligent about keeping track, you’re inevitably going to forget you drank something. A single misplaced bottle can send you back to square one. If you have 40, 50, or 100 bottles, that might only take you an hour,” Alvarez said. “But if you’ve got 500 bottles, or 1,000 bottles, there are other considerations. For example, sizable collections are more likely to include special bottles–gifts, rare and expensive vintages, and so on.”

How Sommely Works

A low-power, 32-bit  Intel® Quark™ microcontroller D1000 resides in each Sommely bottle cap. Uncorked Studios’ engineers leaned heavily on the fine-grained power management features of the Intel Quark microcontroller D1000. Standby mode and fast wake-times helped maximize battery life, which was crucial for a ‘set it and forget it’ solution expected to function for long periods. 

Low-power Intel Quark microcontroller D1000s are tiny enough to fit in a bottle cap. 

The gateway itself uses an Intel NUC and a full Ubuntu* Linux* distribution that enables remote system updates. Currently, the caps and gateway communicate using Bluetooth® Low Energy (BLE) technology. BLE is tailored for low-energy IoT devices. “We decided on BLE for a couple of reasons,” Alvarez said. “When prototyping Sommely, we piggybacked some of our engineering efforts with those of a software engineering team at Intel that was working on a BLE solution. That helped us get from concept to prototype much faster than expected.”

As convenient as BLE seemed, Alvarez doesn’t expect it to be part of the final product. Uncorked Studios is still working with Intel to define an active RF or RFID solution, because the initial Sommely prototype showed that scaling beyond 100 bottles to 500 or more could prove challenging. “With a larger collection, BLE interference could make it challenging to pair everything, so we’ll probably build a custom RF stack for Sommely.”

The gateway’s Intel NUC enables discovery of the BLE caps and maintains their connection state over time by periodically scanning and re-scanning, as well as taking Received Signal Strength Indication (RSSI) measurements. The browser-based system interface draws on a library of collection and search queries, and the design provides an upstream connection to WiFi. The Uncorked Studios team has been using Intel-powered Dell* Venue tablets to test-drive Sommely.
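The scan-and-RSSI loop described above can be sketched as a small presence tracker with a hysteresis band, so a cap at the edge of range does not flicker between present and absent (the dBm thresholds are illustrative assumptions, not values from the Sommely gateway):

```javascript
// Track a cap's presence from periodic RSSI readings (dBm).
// A reading must climb above enterDbm to count as present, and
// drop below exitDbm to count as absent; the gap prevents flicker.
function makePresenceTracker(enterDbm = -75, exitDbm = -90) {
  let present = false;
  return function update(rssi) {
    if (!present && rssi >= enterDbm) present = true;
    else if (present && rssi < exitDbm) present = false;
    return present;
  };
}

const track = makePresenceTracker();
console.log(track(-80)); // false: too weak to count as arrived
console.log(track(-70)); // true: crossed the enter threshold
console.log(track(-85)); // true: hysteresis keeps it present
console.log(track(-95)); // false: dropped below the exit threshold
```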

Frictionless User Interaction

A lot of technology has gone into Sommely, and, Alvarez says, “we’re looking at ways to create frictionless interaction with the system. We want to keep it simple.” According to Alvarez, the physical components, the context in which they’re being used, and even the BIOS all posed design challenges.

For starters, the gateway needed to blend in. After all, wine enthusiasts want the focus of attention to be on the bottles in their collection, not on a high-tech telecommunications device. As a result, the Intel Quark microcontroller D1000 and the battery are hidden within the Sommely bottle caps, and the gateway is housed in an Intel NUC.

In early trials, at runtime the Dell Venue tablet appears to interact directly with the bottle caps themselves. “That’s because we limited the perceptual distance from the transaction between the tablet, the caps, and the hub. The end user won’t know that we're running Linux, a full BLE stack, and a WiFi network stack with a node.js web server. It feels like you go to the system, press a button, and see LEDs light up.”

The Sommely bottle caps supply user feedback through a ring of three LEDs. That presented the UX team with another challenge: developing an intuitive user experience for a device that doesn’t have the traditional elements of a user interface–there’s no screen to display things on other than what’s available through the web app.

Inside each Sommely bottle cap sits three LEDs positioned 120 degrees apart. They change color to give visual feedback. 

“One of the interesting things that emerged with IoT is the trend of having a mobile app that acts like an interface to a device that itself provides user feedback on a more intimate level,” Alvarez said. “That holds true for Sommely. Our web app is just a way for people to control an interface to wine bottles, where the bottles themselves become unique elements of that interface. The Sommely caps fit over the actual wine bottle caps, and respond to input and output from the web app. You can pick up a bottle, tap a button on the Sommely cap, and get feedback through them, as well as display relevant information on the app.”

Sommely bottle caps deliver feedback by lighting up.

UX design decisions were based on context. “Where the bottles would be, relative to the tablet; what happens when you’re out of sight,” Alvarez said. “How do you make the app for the interface something that gets out of the way rather than being in your face? You don't want to be scrolling through an infinite series of menus to just find one wine bottle.”

All of that translated to a solution in which the Intel Quark microcontroller D1000 embedded in each cap responds to user push-button inputs. “We implemented a custom ping-pong soft PWM based on the sample code provided with the Intel SDK to drive the LED ring,” Alvarez explained. “The BLE client state machine converts inbound BLE requests to color, brightness, and off/on. Responses to button pushes can be customized based on a variety of queries. If a user asks to see wines ranging from expensive to less expensive, the color palette spans green to yellow, but for a ‘is it ready to drink right now?’ query, the feedback is a more nuanced color palette.”
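The green-to-yellow price query above amounts to a linear blend between two palette endpoints, something like this sketch (the RGB endpoints and the linear interpolation are illustrative assumptions, not the actual Sommely firmware):

```javascript
// Map a normalized price rank (0 = least expensive, 1 = most expensive)
// onto a green-to-yellow LED color, clamping out-of-range input.
function priceColor(rank) {
  const t = Math.min(1, Math.max(0, rank));
  const green = [0, 255, 0];
  const yellow = [255, 255, 0];
  return green.map((g, i) => Math.round(g + t * (yellow[i] - g)));
}

console.log(priceColor(0));   // [0, 255, 0]
console.log(priceColor(1));   // [255, 255, 0]
console.log(priceColor(0.5)); // [128, 255, 0]
```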

Batteries - Maximizing Performance

Battery life was crucial, so the system enters power-saving states whenever possible. “We exploited the halt and standby states in our firmware to put the SoC into power-saving mode whenever it's not needed. The MCU is idle during dead clocks in our PWM states, and whenever the LED ring is off. As a result, we've got the Intel Quark D1000 in standby almost all of the time, which translates to battery life approaching one year with some intermittent use, which is our current method.”
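The near-one-year figure follows from simple duty-cycle arithmetic: the average draw is the weighted mix of active and standby current. A sketch with illustrative numbers (not measured Sommely values):

```javascript
// Estimate battery life for a duty-cycled MCU.
// capacity_mAh: battery capacity; active_mA / standby_mA: current draw
// in each state; dutyCycle: fraction of time spent active.
function batteryLifeDays(capacity_mAh, active_mA, standby_mA, dutyCycle) {
  const avg_mA = active_mA * dutyCycle + standby_mA * (1 - dutyCycle);
  return capacity_mAh / avg_mA / 24;
}

// 230 mAh coin cell, 10 mA active, 0.02 mA standby, active 0.1% of the time
console.log(Math.round(batteryLifeDays(230, 10, 0.02, 0.001))); // 320 days
```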

The web app notifies users when batteries need to be replaced, but Alvarez cautions against using Sommely caps to run DJ-style light shows. “There will be people who do that, but we don’t want to encourage it. We’d like to implement conductive charging or an RF harvest state. With enough harvesting, we might be able to extend battery life by a couple years. But those are v2 or v3 features.”

Sommely lets users make smart choices by drawing data from the cloud. The app’s backend is hosted on Amazon Web Services. 

With a target audience that holds on to wine for anywhere from a year to three years, Alvarez knows how important extended battery life is. “Cost is the primary issue holding us back. We’re looking at ways to gracefully send new caps to customers as batteries near end-of-life.”

Some wine enthusiasts believe it’s important to annually rotate the bottles in their collections a quarter-turn. Alvarez says they considered using the accelerometer in their sensors and the light ring in the bottle caps to tell users when to turn bottles. “The three LEDs are 120 degrees apart, so we could have lit an LED to signal when it’s time to rotate a bottle, and then turned the LED off when the quarter turn was complete.”

Intel Inside the Software Stack

The Uncorked Studios engineers turned to a number of Intel software development tools when coding Sommely. The gateway runs a custom MEAN stack implementation and Intel® System Studio for Microcontrollers was used to code the Intel Quark microcontroller D1000 firmware.

“Having sample code for the Intel Quark D1000 firmware made a huge difference for us when it came to some of the trickier portions of the system, power management specifically,” Alvarez said.

As a result of the relationships established during the invitation-only developer day, Intel hardware and software engineers helped the Uncorked Studios team with the Intel Quark D1000 firmware. “We were able to get up and running on the Edison NUC very quickly,” Alvarez said. “With the Intel Quark D1000, Intel engineers were able to look at our code and give us input that helped us resolve issues. When we’ve had challenges or questions, they’ve been right there for us.”

More importantly, Alvarez said, “Intel’s Marc Alexander was an advocate. He saw the potential in Sommely from a broader perspective. He saw beyond the immediate implication of how many chips they might sell, and understood that Sommely was an exciting IoT use case with a lot of potential. If we didn't have believers within Intel, or access to domain experts within various aspects of IoT and the startup world, we wouldn't be where we are today.”

Next Steps

Sommely is a work-in-progress. At the time of writing, enhancements to its power management stack are top-of-mind for Alvarez. “If we could harvest energy and not have to replace batteries, that would be a leaps-and-bounds improvement. I think power management is a challenge all IoT devices are going to face at some point.”

Alvarez envisions solutions ranging from the utilitarian to the glamorous. “We have a few ideas for how we could recharge the caps using either existing induction charging technologies as well as some new approaches that we’re excited to prototype. We think it’s a space where we can continue to partner with Intel to solve a much larger industry challenge.”

In Summary

Using IoT technology to keep track of valuable things offers solutions to many high-value inventory management challenges: tracking tools in a factory, keeping tabs on a collection of Star Wars figures, Barbie dolls, antiques, and other rare collectibles being shipped across town or across borders, or monitoring the day-to-day life of a wine collection.

Uncorked Studios was in lean, startup mode with Sommely when they started working with Intel IoT technology. Intel hardware and software engineers helped the Uncorked team quickly prototype the smart asset-management solution. The lessons they’ve learned so far, and possible new use cases, hold great potential, especially when scaled to support larger inventories.

“We've looked at different contextual experiences that go beyond one-cap-one-bottle,” Alvarez said. “What could a restaurant or winery do with Sommely? Is there a SKU that just sits on top of a case that holds 12 individual units representing the 12 bottles within? Maybe those units detach and go on the bottles when you crack open the case? We also considered transferring ownership of a bottle and keeping that history in the cap itself. It’d be like an electronic manifest for the bottle. We have thought through a number of scenarios, but aren’t ready to implement them.”

Whatever the future holds for Sommely, it shows how Intel fosters innovation through collaboration with startups. By helping Uncorked Studios overcome technical hurdles and scale Sommely from one to many sensors/caps, Intel gained valuable insights. Those insights, in turn, helped the Intel team refine and enhance Intel’s IoT hardware and software solutions.

If you’ve got a burning desire to change the world with IoT technology, but you’re having technical difficulties, drop by the Intel Developer Zone. Our domain experts, and the developer community at large, might be able to lend a hand.

 

Intel® XDK FAQs - Cordova


How do I set app orientation?

You set the orientation under the Build Settings section of the Projects tab.

To control the orientation of an iPad you may need to create a simple plugin that contains a single plugin.xml file like the following:

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
  <string></string>
</config-file>
<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
  <array>
    <string>UIInterfaceOrientationPortrait</string>
  </array>
</config-file>

Then add the plugin as a local plugin using the plugin manager on the Projects tab.

HINT: to import the plugin.xml file you created above, you must select the folder that contains the plugin.xml file; you cannot select the plugin.xml file itself in the import dialog, because a typical plugin consists of many files, not a single plugin.xml. The plugin you created based on the instructions above requires only a single file; it is an atypical plugin.
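For reference, a complete minimal plugin.xml wrapping config-file entries like those above might look like the following (the id, name, and version values are placeholders; check the Cordova plugin specification for your CLI version):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
        id="com.example.ipad-orientation" version="0.0.1">
  <name>iPad Orientation Lock</name>
  <platform name="ios">
    <config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
      <string></string>
    </config-file>
    <config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
      <array><string>UIInterfaceOrientationPortrait</string></array>
    </config-file>
  </platform>
</plugin>
```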

Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. Import it as a third-party Cordova* plugin using the plugin manager with the following information:

  • cordova-plugin-screen-orientation
  • specify a version (e.g. 1.4.0) or leave blank for the "latest" version

Or, you can reference it directly from its GitHub repo.

To use the screen orientation plugin referenced above you must add some JavaScript code to your app to manipulate the additional JavaScript API that is provided by this plugin. Simply adding the plugin will not automatically fix your orientation, you must add some code to your app that takes care of this. See the plugin's GitHub repo for details on how to use that API.
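For example, a guarded call like the following keeps the app working even when the plugin is absent (lockOrientation is the API some versions of this plugin have exposed; confirm against the plugin's repo for your version):

```javascript
// Lock the screen to portrait if the orientation plugin is available.
function lockPortrait(scr) {
  if (scr && typeof scr.lockOrientation === 'function') {
    scr.lockOrientation('portrait');
    return true;
  }
  return false; // plugin not loaded (e.g. desktop browser preview)
}

// In the app you would call lockPortrait(window.screen) from the
// deviceready handler; a stub stands in for the real screen object here:
const calls = [];
console.log(lockPortrait({ lockOrientation: (o) => calls.push(o) })); // true
console.log(calls); // ['portrait']
console.log(lockPortrait({})); // false
```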

Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK's build system will work with it.

How do I send an email from my App?

You can use the Cordova* email plugin or use web intent - PhoneGap* and Cordova* 3.X.

How do you create an offline application?

You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
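A minimal offline.appcache might look like the following (the file names are placeholders for your app's own assets):

```
CACHE MANIFEST
# Files listed here are stored for offline use
index.html
css/app.css
js/app.js

NETWORK:
*
```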

How do I work with alarms and timed notifications?

Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK's build system will work with it.

How do I get a reliable device ID?

You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8.

How do I implement In-App purchasing in my app?

There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called 'In App Purchase' which can be downloaded here.

How do I install custom fonts on devices?

Fonts can be treated as an asset included with your app, just like images and CSS files: they are private to the app and not shared with other apps on the device. (It is possible to share some files between apps using, for example, the SD card space on an Android* device.) If you include the font files as assets in your application, there is no download time to consider; they are part of your app and already exist on the device after installation.

How do I access the device's file storage?

You can use HTML5 local storage, and this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.

Why isn't AppMobi* push notification services working?

This seems to be an issue on AppMobi's end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.

How do I configure an app to run as a service when it is closed?

If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.

How do I dynamically play videos in my app?

  1. Download the Javascript and CSS files from https://github.com/videojs and include them in your project file.
  2. Add references to them into your index.html file.
  3. Add a panel 'main1' that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

     
    <div class="panel" id="main1" data-appbuilder-object="panel" style=""><video id="example_video_1" class="video-js vjs-default-skin" controls="controls" preload="auto" width="200" poster="camera.png" data-setup="{}"><source src="JAIL.mp4" type="video/mp4"><p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href=http://videojs.com/html5-video-support/ target="_blank">supports HTML5 video</a></p></video><a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a></div>
  4. When the user clicks on the video, the click event sets the 'src' attribute of the video element to what the user wants to watch.

     
    function runVid2(){
          document.getElementsByTagName("video")[0].setAttribute("src","appdes.mp4");
          $.ui.loadContent("#main1",true,false,"pop");
    }
  5. The 'main1' panel opens waiting for the user to click the play button.

NOTE: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

How do I design my Cordova* built Android* app for tablets?

This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.

How do I resolve icon related issues with Cordova* CLI build system?

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS* 6, you need to manually specify the icon sizes that iOS* 6 uses.

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" />
<icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

The build system does not include these icon entries automatically, so you will have to add them in the additions file.

For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

Is there a plugin I can use in my App to share content on social media?

Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

Iframe does not load in my app. Is there an alternative?

Yes, you can use the inAppBrowser plugin instead.

Why are intel.xdk.istablet and intel.xdk.isphone not working?

Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead: get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion of the topic.
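Building on the screen-size approach above, here is a minimal sketch. The 600px breakpoint is a common convention, not an official rule, and isTablet/isPhone are hypothetical helper names:

```javascript
// Hypothetical replacement for intel.xdk.istablet / intel.xdk.isphone:
// classify the device by its smaller screen dimension (in CSS pixels).
// The 600px threshold is a widely used convention, not a guarantee.
function isTablet(screenWidth, screenHeight) {
  var smallerSide = Math.min(screenWidth, screenHeight);
  return smallerSide >= 600;
}

function isPhone(screenWidth, screenHeight) {
  return !isTablet(screenWidth, screenHeight);
}

// In a real app you would call it with the live values:
// var tablet = isTablet(screen.width, screen.height);
```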

How do I enable security in my app?

We recommend using the App Security API. App Security API is a collection of JavaScript API for Hybrid HTML5 application developers. It enables developers, even those who are not security experts, to take advantage of the security properties and capabilities supported by the platform. The API collection is available to developers in the form of a Cordova plugin (JavaScript API and middleware), supported on the following operating systems: Windows, Android & iOS.
For more details please visit: https://software.intel.com/en-us/app-security-api.

For enabling it, please select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. After adding the plugin, you can start using it simply by calling its API. For more details about how to get started with the App Security API plugin, please see the relevant sample app articles at: https://software.intel.com/en-us/xdk/article/my-private-photos-sample and https://software.intel.com/en-us/xdk/article/my-private-notes-sample.

Why does my build fail with Admob plugins? Is there an alternative?

Intel XDK does not support the library project that was newly introduced in the com.google.playservices@21.0.0 plugin. Admob plugins depend on "com.google.playservices", which adds the Google* Play services jar to the project. "com.google.playservices@19.0.0" is a simple jar file that works quite well, but "com.google.playservices@21.0.0" uses a new feature to include a whole library project. It works if built locally with the Cordova CLI, but fails when using the Intel XDK.

To remain compatible with the Intel XDK, the Admob plugin's dependency should be changed to "com.google.playservices@19.0.0".

Why does the intel.xdk.camera plugin fail? Is there an alternative?

There appear to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin instead, and change its version to 0.3.3.

How do I resolve Geolocation issues with Cordova?

Give this app a try; it contains lots of useful comments and console log messages. However, use the Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. The Intel XDK buttons on the sample app will not work in a built app because the Intel XDK geo plugin is not included; they will, however, partially work in the Emulator and Debug tab. If you test on a real device without the Intel XDK geo plugin selected, you should be able to see what is and is not working on your device. Note that the Intel XDK geo plugin cannot be used in the same build as the Cordova geo plugin. Do not use the Intel XDK geo plugin, as it will be discontinued.

Geo fine might not work because of the following reasons:

  1. Your device does not have a GPS chip
  2. It is taking a long time to get a GPS lock (if you are indoors)
  3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet to quickly get an initial reading. It derives a reading from a variety of inputs; it is usually not as accurate as geo fine, but it is generally accurate enough to know what town you are in and your approximate location within that town. Geo coarse will also prime the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you are not getting any geo data, as there is no guarantee you will get a geo fine reading at all, or within a reasonable period of time; success with geo fine depends on many parameters that are typically outside of your control.
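The coarse-then-fine strategy above can be sketched with the standard W3C Geolocation API (which the Cordova geo plugin implements); getCoarseThenFine is a hypothetical helper, and the specific timeout/maximumAge values are illustrative assumptions, not required settings:

```javascript
// Build PositionOptions for a coarse or fine request.
// The timeout/maximumAge values below are illustrative choices.
function geoOptions(fine) {
  return {
    enableHighAccuracy: !!fine,      // false = "geo coarse", true = "geo fine"
    timeout: fine ? 30000 : 5000,    // give a GPS lock more time to succeed
    maximumAge: fine ? 0 : 60000     // a cached fix is acceptable for coarse
  };
}

// Hypothetical helper: get a quick coarse fix first, then try for a fine fix.
// Both failures funnel into onError so the app can still proceed without geo.
function getCoarseThenFine(geolocation, onPosition, onError) {
  geolocation.getCurrentPosition(function (coarsePos) {
    onPosition(coarsePos, /* fine: */ false);
    geolocation.getCurrentPosition(function (finePos) {
      onPosition(finePos, /* fine: */ true);
    }, onError, geoOptions(true));
  }, onError, geoOptions(false));
}

// Usage in a real app:
// getCoarseThenFine(navigator.geolocation, showPosition, showGeoError);
```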

Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

Yes, there is, and you can find the one that best fits the bill in the Cordova* plugin registry.

To make this work you will need to do the following:

  • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
  • Include the plugin only on the Android* platform and use <video> on iOS*.
  • Create conditional code to do what is appropriate for the platform detected
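A minimal sketch of the detection and branching described above. The regex-based isAndroidUA check is a simplified, hand-rolled alternative to uaparser.js, and the Android branch is left as a placeholder because the exact API depends on the plugin you choose:

```javascript
// Simplified platform check via the user agent string; not exhaustive.
function isAndroidUA(userAgent) {
  return /android/i.test(userAgent);
}

function playPodcast(url) {
  if (isAndroidUA(navigator.userAgent)) {
    // Android*: call the media/podcast plugin you selected from the
    // registry here (plugin name and API are placeholders -- check the
    // plugin's README for the actual call).
  } else {
    // iOS*: fall back to a plain <video> element, per the steps above.
    var video = document.createElement("video");
    video.src = url;
    video.controls = true;
    document.body.appendChild(video);
    video.play();
  }
}
```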

You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

  1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
  2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->.

More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" />
<preference name="StatusBarOverlaysWebView" value="false" />
<preference name="StatusBarBackgroundColor" value="#000000" />
<preference name="StatusBarStyle" value="lightcontent" />
<!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" />
<!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces a plugin included with the "import plugin" dialog to be excluded from the platforms shown. You can then include it only on the Android* platform by using conditional code and one or more appropriate plugins.

How do I display a webpage in my app without leaving my app?

The most effective way to do so is by using inAppBrowser.
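For example, a minimal sketch using the cordova-plugin-inappbrowser API (this assumes the InAppBrowser plugin is selected in your project; "location=yes" shows the address bar and is optional):

```javascript
// Open a page in the InAppBrowser overlay so the user stays in your app.
// showPage is a hypothetical wrapper name.
function showPage(url) {
  return cordova.InAppBrowser.open(url, "_blank", "location=yes");
}
```

The returned reference can later be closed with its close() method when you want to return control to your app.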

Does Cordova* media have callbacks in the emulator?

While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.

Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?

This is due to the difficulty in keeping different components in sync and is compounded by the version numbering convention that the Cordova project uses to distinguish build tool versions (the CLI version) from platform versions (the Cordova target-specific framework version) and plugin versions.

The CLI version you specify in the Projects tab's Build Settings section is the "Cordova CLI" version that the build system uses to build your app. Each version of the Cordova CLI tools comes with a set of "pinned" Cordova platform framework versions, which are tied to the target platform.

NOTE: the specific Cordova platform framework versions shown below are subject to change without notice.

Our Cordova CLI 4.1.2 build system was "pinned" to: 

  • cordova-android@3.6.4 (Android Cordova platform version 3.6.4)
  • cordova-ios@3.7.0 (iOS Cordova platform version 3.7.0)
  • cordova-windows@3.7.0 (Cordova Windows platform version 3.7.0)

Our Cordova CLI 5.1.1 build system is "pinned" to:

  • cordova-android@4.1.1 (as of March 23, 2016)
  • cordova-ios@3.8.0
  • cordova-windows@4.0.0

Our Cordova CLI 5.4.1 build system is "pinned" to: 

  • cordova-android@5.0.0
  • cordova-ios@4.0.1
  • cordova-windows@4.3.1

Our Cordova CLI 6.2.0 build system is "pinned" to: 

  • cordova-android@5.1.1
  • cordova-ios@4.1.1
  • cordova-windows@4.3.2

Our CLI 6.2.0 build system is nearly identical to a standard Cordova CLI 6.2.0 installation. A standard 6.2.0 installation differs slightly from our build system because it specifies the cordova-ios@4.1.0 and cordova-windows@4.3.1 platform versions. There are no differences in the cordova-android platform versions.

Our CLI 5.4.1 build system really should be called "CLI 5.4.1+" because the platform versions it uses are closer to the "pinned" versions in the Cordova CLI 6.0.0 release than those "pinned" in the original CLI 5.4.1 release.

Our CLI 5.1.1 build system was deprecated as of August 2, 2016 and will be retired in an upcoming fall 2016 release of the Intel XDK. It is highly recommended that you upgrade your apps to build with Cordova CLI 6.2.0 as soon as possible.

The Cordova platform framework version you get when you build an app does not equal the CLI version number in the Build Settings section of the Projects tab; it equals the Cordova platform framework version that is "pinned" to our build system's CLI version (see the list of pinned versions, above).

Technically, the target-specific Cordova platform frameworks can be updated [independently] for a given version of CLI tools. In some cases, our build system may use a Cordova platform version that is later than the version that was "pinned" to that version of the CLI when it was originally released by the Cordova project (that is, the Cordova platform versions originally specified by the Cordova CLI x.y.z links above).

You may see Cordova platform version differences in the Simulate tab, App Preview and your built app due to:

  • The Simulate tab uses one specific Cordova framework version. We try to make sure that the version of the Cordova platform it uses closely matches the current default Intel XDK version of Cordova CLI.

  • App Preview is released independently of the Intel XDK and, therefore, may use a different platform version than what you will see reported by the Simulate tab or your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is considered to be the default version for the Intel XDK at the time App Preview is released; but since the various tools are not always released in perfect sync, that is not always possible.

  • Your app is built with a "pinned" Cordova platform version, which is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section. There are always at least two different CLI versions available in the Intel XDK build system.

  • For those versions of Crosswalk that were built with the Intel XDK CLI 4.1.2 build system, the cordova-android framework version was determined by the Crosswalk project, not by the Intel XDK build system.

  • When building an Android-Crosswalk app with Intel XDK CLI 5.1.1 and later, the cordova-android framework version equals the "pinned" cordova-android platform version for that CLI version (see lists above).

Do these Cordova platform framework version numbers matter? Occasionally, yes, but normally, not that much. There are some issues that come up that are related to the Cordova platform version, but they tend to be rare. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose to use and the HTML5 webview runtime on your test devices. See When is an HTML5 Web App a WebView App? for more details about what a webview is and how the webview affects your app.

The "default version" of CLI that the Intel XDK build system uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and other Intel XDK components. In addition, we are not able to provide every CLI release that is made available by the Cordova project.

How do I add a third party plugin?

Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. Until you do so, the plugin is not included as part of your app; you will see it in the build log if it was successfully added to your build.

How do I make an AJAX call that works in my browser work in my app?

Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.

I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to the APIs you are using. Go to the Projects tab and ensure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.

How do I target my app for use only on an iPad or only on an iPhone?

There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but failed to document it for the rest of the world). If you use the appropriate preference in the intelxdk.config.additions.xml file you should get what you need:

<preference name="target-device" value="tablet" />     <!-- Installs on iPad, not on iPhone -->
<preference name="target-device" value="handset" />    <!-- Installs on iPhone; iPad installs in a zoomed view and doesn't fill the entire screen -->
<preference name="target-device" value="universal" />  <!-- Installs on iPhone and iPad correctly -->

If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.

Why does my build fail when I try to use the Cordova* Capture Plugin?

The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.

How can I pinch and zoom in my Cordova* app?

For now, using the viewport meta tag is the only option for enabling pinch and zoom; however, its behavior is unpredictable in different webviews. Testing a few sample apps has led us to believe that this feature works better in Crosswalk for Android. You can test this by building the Hello Cordova sample app for both Android and Crosswalk for Android; pinch and zoom will work only on the latter, even though both include:

<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, minimum-scale=1, maximum-scale=2">.

Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:

http://blogs.intel.com/evangelists/2014/09/02/html5-web-app-webview-app/

https://software.intel.com/en-us/xdk/docs/why-use-crosswalk-for-android-builds

Another device oriented approach is to enable it by turning on Android accessibility gestures.

How do I make my Android application use the fullscreen so that the status and navigation bars disappear?

The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, include this function AndroidFullScreen.immersiveMode(null, null);.

You can get this third-party plugin from here https://github.com/mesmotronic/cordova-fullscreen-plugin

How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?

The Cordova CLI 4.1.2 build system supports this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX sizes directly. Use this workaround until these sizes are supported directly:

  • copy your XX and XXX icons into your source directory (usually named www)
  • add the following lines to your intelxdk.config.additions.xml file
  • see this Cordova doc page for some more details

Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these into your intelxdk.config.additions.xml file (the precise names of your png files may differ from what is shown here):

<!-- for adding xxhdpi and xxxhdpi icons on Android -->
<icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" />
<icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" />
<splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/>
<splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>

The precise names of your PNG files are not important, but the "density" designations are very important and, of course, the respective resolutions of your PNG files must be consistent with Android requirements. Those density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: splash screen references have been added for reference, you do not need to use this technique for splash screens.

You can continue to insert the other icons into your app using the Intel XDK Projects tab.

Which plugin is the best to use with my app?

We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.

Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.

See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.

What are the rules for my App ID?

The precise App ID naming rules vary as a function of the target platform (e.g., Android, iOS, Windows, etc.). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.

CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to set limits on acceptable App IDs to equal the minimum set for all platforms. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the current requirements for CLI 5.1.1 are:

  • Each section of the App ID must start with a letter
  • Each section can only consist of letters, numbers, and the underscore character
  • Each section cannot be a Java keyword
  • The App ID must consist of at least 2 sections (each section separated by a period ".").
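As an illustration, the rules above could be checked with a small client-side validator. isValidAppId is a hypothetical helper (not part of the XDK or Cordova), and the Java keyword list shown is abbreviated for brevity:

```javascript
// Abbreviated list for illustration; the full Java keyword list is longer.
var JAVA_KEYWORDS = ["abstract", "case", "class", "default", "for", "goto",
                     "if", "import", "int", "new", "package", "private",
                     "public", "return", "static", "switch", "this",
                     "void", "while"];

// Checks the CLI 5.1.1 App ID rules listed above:
// at least 2 sections, each starting with a letter, containing only
// letters/digits/underscore, and not equal to a Java keyword.
function isValidAppId(appId) {
  var sections = appId.split(".");
  if (sections.length < 2) return false;
  return sections.every(function (s) {
    return /^[A-Za-z][A-Za-z0-9_]*$/.test(s) &&
           JAVA_KEYWORDS.indexOf(s) === -1;
  });
}
```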

 

iOS /usr/bin/codesign error: certificate issue for iOS app?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a signing identity error you probably have a bad or inconsistent provisioning file. The "no identity found" message in the build log excerpt, below, means that the provisioning profile does not match the distribution certificate that was uploaded with your application during the build phase.

Signing Identity:     "iPhone Distribution: XXXXXXXXXX LTD (Z2xxxxxx45)"
Provisioning Profile: "MyProvisioningFile"
                      (b5xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxe1)

    /usr/bin/codesign --force --sign 9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6 --resource-rules=.../MyApp/platforms/ios/build/device/MyApp.app/ResourceRules.plist --entitlements .../MyApp/platforms/ios/build/MyApp.build/Release-iphoneos/MyApp.build/MyApp.app.xcent .../MyApp/platforms/ios/build/device/MyApp.app
9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6: no identity found
Command /usr/bin/codesign failed with exit code 1

** BUILD FAILED **


The following build commands failed:
    CodeSign build/device/MyApp.app
(1 failure)

The excerpt shown above will appear near the very end of the detailed build log. The unique number patterns in this example have been replaced with "xxxx" strings for security reasons. Your actual build log will contain hexadecimal strings.

iOS Code Sign error: bundle ID does not match app ID?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a "Code Sign error" you may have a bad or inconsistent provisioning file. The "Code Sign" message in the build log excerpt, below, means that the bundle ID you specified in your Apple provisioning profile does not match the app ID you provided to the Intel XDK to upload with your application during the build phase.

Code Sign error: Provisioning profile does not match bundle identifier: The provisioning profile specified in your build settings (MyBuildSettings) has an AppID of my.app.id which does not match your bundle identifier my.bundleidentifier.
CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 8.0'

** BUILD FAILED **

The following build commands failed:
    Check dependencies
(1 failure)
Error code 65 for command: xcodebuild with args: -xcconfig,...

The message above translates into "the bundle ID you entered in the project settings of the XDK does not match the bundle ID (app ID) that you created on Apple's developer portal and then used to create a provisioning profile."

iOS build error?

If your iOS build fails with error code 65 and xcodebuild in the error log, there are most likely issues with your certificate and provisioning profile. Sometimes Xcode gives specific errors such as "Provisioning profile does not match bundle identifier" and other times something like "Code Sign error: No codesigning identities found: No code signing identities". The root of these issues comes from not providing the correct certificate (P12 file) and/or provisioning profile, or from a mismatch between the P12 and the provisioning profile. You have to make sure your P12 and provisioning profile are correct: the provisioning profile has to be generated using the certificate you used to create the P12 file. Also, the app ID you provide in the XDK Build Settings has to match the app ID created on the Apple Developer portal, and the same app ID has to be used when creating a provisioning profile.

Please follow these steps to generate the P12 file.

  1. Create a .csr file from Intel XDK (do not close the dialog box to upload .cer file)
  2. Click on the link Apple Developer Portal from the dialog box (do not close the dialog box in XDK)
  3. Upload .csr on Apple Developer Portal
  4. Generate certificate on Apple developer portal
  5. Download .cer file from the Developer portal
  6. Come back to XDK dialog box where you left off from step 1, press Next. Select .cer file that you got from step 5 and generate .P12 file
  7. Create an appID on Apple Developer Portal
  8. Generate a Provisioning Profile on Apple Developer Portal using the certificate you generated in step 4 and appID created in step 7
  9. Provide the same appID (step 7), P12 (step 6) and Provisioning profile (step 8) in Intel XDK Build Settings 

A few things to check before you build:

  1. Make sure your certificate has not expired
  2. The app ID you created on the Apple Developer portal matches the app ID you provided in the XDK Build Settings
  3. You are using a provisioning profile that is associated with the certificate you are using to build the app
  4. Apple allows only 3 active certificates; if you need to create a new one, revoke one of the older certificates first

This App Certificate Management video shows how to create a P12 and a provisioning profile; the P12 creation part starts at 16:45. Please follow the process for creating a P12 and generating a provisioning profile as shown in the video, or follow this Certificate Management document.

What are plugin variables used for? Why do I need to supply plugin variables?

Some plugins require details that are specific to your app or your developer account; for example, to authorize your app as an app that belongs to you, the developer, so that services can be properly routed to the service provider. The precise reasons depend on the specific plugin and its function.

What happened to the Intel XDK "legacy" build options?

On December 14, 2015 the Intel XDK legacy build options were retired and are no longer available to build apps. The legacy build option is based on three year old technology that predates the current Cordova project. All Intel XDK development efforts for the past two years have been directed at building standard Apache Cordova apps.

Many of the intel.xdk legacy APIs that were supported by the legacy build options have been migrated to standard Apache Cordova plugins and published as open source plugins. The API details for these plugins are available in the README.md files in the respective 01.org GitHub repos. Additional details regarding the new Cordova implementations of the intel.xdk legacy APIs are available in the doc page titled Intel XDK Legacy APIs.

Standard Cordova builds do not require the use of the "intelxdk.js" and "xhr.js" phantom scripts. Only the "cordova.js" phantom script is required to successfully build Cordova apps. If you have been including "intelxdk.js" and "xhr.js" in your Cordova builds they have been quietly ignored. You should remove references to these files from your "index.html" file; leaving them in will do no harm, it simply results in a warning that the respective script file cannot be found at runtime.

The Emulate tab will continue to support some legacy intel.xdk APIs that are NOT supported in the Cordova builds (only those intel.xdk APIs that are supported by the open source plugins are available to a Cordova built app, and only if you have included the respective intel.xdk plugins). This Emulate tab discrepancy will be addressed in a future release of the Intel XDK.

More information can be found in this forum post > https://software.intel.com/en-us/forums/intel-xdk/topic/601436.

Which build files do I submit to the Windows Store and which do I use for testing my app on a device?

There are two things you can do with the build files generated by the Intel XDK Windows build options: side-load your app onto a real device (for testing) or publish your app in the Windows Store (for distribution). Microsoft has changed the files you use for these purposes with each release of a new platform. As of December, 2015, the packages you might see in a build, and their uses, are:

  • appx works best for side-loading, and can also be used to publish your app.
  • appxupload is preferred for publishing your app, it will not work for side-loading.
  • appxbundle will work for both publishing and side-loading, but is not preferred.
  • xap is for legacy Windows Phone; works for both publishing and side-loading.

In essence: XAP (WP7) was superseded by APPXBUNDLE (Win8 and WP8.0), which was superseded by APPX (Win8/WP8.1/UAP), which has been supplemented with APPXUPLOAD. APPX and APPXUPLOAD are the preferred formats. For more information regarding these file formats, see Upload app packages on the Microsoft developer site.

Side-loading a Windows Phone app onto a real device, over USB, requires a Windows 8+ development system (see Side-Loading Windows* Phone Apps for complete instructions). If you do not have a physical Windows development machine you can use a virtual Windows machine or use the Windows Store Beta testing and targeted distribution technique to get your app onto real test devices.

Side-loading a Windows tablet app onto a Windows 8 or Windows 10 laptop or tablet is simpler. Extract the contents of the ZIP file that you downloaded from the Intel XDK build system, open the "*_Test" folder inside the extracted folder, and run the PowerShell script (ps1 file) contained within that folder on the test machine (the machine that will run your app). The ps1 script file may need to request a "developer certificate" from Microsoft before it will install your test app onto your Windows test system, so your test machine may require a network connection to successfully side-load your Windows app.

The side-loading process may not over-write an existing side-loaded app with the same ID. To be sure your test app properly side-loads, it is best to uninstall the old version of your app before side-loading a new version on your test system.

How do I implement local storage or SQL in my app?

See this summary of local storage options for Cordova apps written by Josh Morony, A Summary of Local Storage Options for PhoneGap Applications.

How do I prevent my app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.

Why does my PHP script not run in my Intel XDK Cordova app?

Your XDK app is not a page on a web server; you cannot use dynamic web server techniques because there is no web server associated with your app to which you can pass off PHP scripts and similar actions. When you build an Intel XDK app you are building a standalone Cordova client web app, not a dynamic server web app. You need to create a RESTful API on your server that you can then call from your client (the Intel XDK Cordova app) and pass and return data between the client and server through that RESTful API (usually in the form of a JSON payload).
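As a sketch of the client side, here is how the app might call such a RESTful API with a plain XMLHttpRequest and a JSON payload. The server URL, endpoint path, and helper names are hypothetical placeholders for your own server's API:

```javascript
// Hypothetical helper: build the endpoint URL for a user resource.
function getUsersUrl(baseUrl, id) {
  return baseUrl.replace(/\/+$/, "") + "/api/users/" + encodeURIComponent(id);
}

// Fetch a user record from the (hypothetical) RESTful API and hand the
// parsed JSON to onSuccess; failures go to onError with an HTTP status
// (0 for network errors).
function fetchUser(id, onSuccess, onError) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", getUsersUrl("https://your-server.example.com", id));
  xhr.onload = function () {
    xhr.status === 200 ? onSuccess(JSON.parse(xhr.responseText)) : onError(xhr.status);
  };
  xhr.onerror = function () { onError(0); };
  xhr.send();
}
```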

Please see this StackOverflow post and this article by Ray Camden, a longtime developer of the Cordova development environment and Cordova apps, for some useful background.

Following is a lightly edited recommendation from an Intel XDK user:

I came from php+mysql web development. My first attempt at an Intel XDK Cordova app was to create a set of php files to query the database and give me the JSON. It was a simple job, but totally insecure.

Then I found dreamfactory.com, an open source software that automatically creates the REST API functions from several databases, SQL and NoSQL. I use it a lot. You can start with a free account to develop and test and then install it in your server. Another possibility is phprestsql.sourceforge.net, this is a library that does what I tried to develop by myself. I did not try it, but perhaps it will help you.

And finally, I'm using PouchDB and CouchDB, "a database for the web." It is not SQL, but it is very useful and easy if you need to develop a mobile app with only a few tables. It will also work with a lot of tables, but for a simple database it is an easy place to start.

I strongly recommend that you start to learn these new ways to interact with databases; you will need to invest some time, but it is the way to go. Do not try to use MySQL and PHP the old-fashioned way; you can get it to work, but at some point you may get stuck.

Why doesn’t my Cocos2D game work on iOS?

This is an issue with Cocos2D and is not a reflection of our build system. As an interim solution, we have modified the CCBoot.js file for compatibility with iOS and App Preview. You can view an example of this modification in this CCBoot.js file from the Cocos2d-js 3.1 Scene GUI sample. The update has been applied to all cocos2D templates and samples that ship with Intel XDK. 

The fix involves two line changes (for the generic cocos2D fix) and one additional line (for it to work in App Preview on iOS devices):

Generic cocos2D fix -

1. Inside the loadTxt function, xhr.onload should be defined as

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.responseText != "" ? cb(null, xhr.responseText) : cb(errInfo);
};

instead of

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.status == 200 ? cb(null, xhr.responseText) : cb(errInfo);
};

2. The condition inside _loadTxtSync function should be changed to 

if (!xhr.readyState == 4 || (xhr.status != 200 || xhr.responseText != "")) {

instead of 

if (!xhr.readyState == 4 || xhr.status != 200) {

 

App Preview fix -

Add this line inside the _loadTxtSync function, after the xhr.open call:

xhr.setRequestHeader("iap_isSyncXHR", "true");

How do I change the alias of my Intel XDK Android keystore certificate?

You cannot change the alias name of your Android keystore within the Intel XDK, but you can download the existing keystore, change the alias on that keystore and upload a new copy of the same keystore with a new alias.

Use the following procedure:

  • Download the converted legacy keystore from the Intel XDK (the one with the bad alias).

  • Locate the keytool app on your system (this assumes that you have a Java runtime installed on your system). On Windows, this is likely to be located at %ProgramFiles%\Java\jre8\bin (you might have to adjust the value of jre8 in the path to match the version of Java installed on your system). On Mac and Linux systems it is probably located in your path (in /usr/bin).

  • Change the alias of the keystore using this command (see the keytool -changealias -help command for additional details):

keytool -changealias -alias "existing-alias" -destalias "new-alias" -keypass keypass -keystore /path/to/keystore -storepass storepass
  • Import this new keystore into the Intel XDK using the "Import Existing Keystore" option in the "Developer Certificates" section of the "person icon" located in the upper right corner of the Intel XDK.

What causes "The connection to the server was unsuccessful. (file:///android_asset/www/index.html)" error?

See this forum thread for some help with this issue. This error is most likely due to errors retrieving assets over the network or long delays associated with retrieving those assets.

How do I manually sign my Android or Crosswalk APK file with the Intel XDK?

To sign an app manually, you must build your app by "deselecting" the "Signed" box in the Build Settings section of the Android tab on the Projects tab:

Follow these Android developer instructions to manually sign your app. The instructions assume you have Java installed on your system (for the jarsigner and keytool utilities). You may have to locate and install the zipalign tool separately (it is not part of Java) or download and install Android Studio.

These two sections of the Android developer Signing Your Applications article are also worth reading.

Why should I avoid using the additions.xml file? Why should I use the Plugin Management Tool in the Intel XDK?

Intel XDK (2496 and up) now includes a Plugin Management Tool that simplifies adding and managing Cordova plugins. We urge all users to manage their plugins from existing or upgraded projects using this tool. If you were using intelxdk.config.additions.xml file to manage plugins in the past, you should remove them and use the Plugin Management Tool to add all plugins instead.

Why you should be using the Plugin Management Tool:

  • It can now manage plugins from all sources. Popular plugins have been added to the Featured plugins list. Third-party plugins can be added from the Cordova Plugin Registry, a Git repo, or your file system.

  • Consistency: Unlike previous versions of the Intel XDK, plugins you add are now stored as a part of your project on your development system after they are retrieved by the Intel XDK and copied to your plugins directory. These plugin files are delivered, along with your source code files, to the Intel XDK cloud-based build server. This change ensures greater consistency between builds, because you always build with the plugin version that was retrieved by the Intel XDK into your project. It also provides better documentation of the components that make up your Cordova app, because the plugins are now part of your project directory. This is also more consistent with the way a standard Cordova CLI project works.

  • Convenience: In the past, the only way to add a third-party plugin that required parameters was to include it in the intelxdk.config.additions.xml file. This plugin would then be added to your project by the build system. This is no longer recommended. The new Plugin Management Tool automatically parses the plugin.xml file and prompts you, from within the Intel XDK, for any plugin variables.

    When a plugin is added via the Plugin Management Tool, a plugin entry is added to the project file and the plugin source is downloaded to the plugins directory making a more stable project. After a build, the build system automatically generates config xml files in your project directory that includes a complete summary of plugins and variable values.

  • Correctness of Debug Module: Intel XDK now provides remote on-device debugging for projects with third party plugins by building a custom debug module from your project plugins directory. It does not write or read from the intelxdk.config.additions.xml and the only time this file is used is during a build. This means the debug module is not aware of your plugin added via the intelxdk.config.additions.xml file and so adding plugins via intelxdk.config.additions.xml file should be avoided. Here is a useful article for understanding Intel XDK Build Files.

  • Editing Plugin Sources: There are a few cases where you may want to modify plugin code to fix a bug in a plugin, or add console.log messages to a plugin's sources to help debug your application's interaction with the plugin. To accomplish these goals you can edit the plugin sources in the plugins directory. Your modifications will be uploaded along with your app sources when you build your app using the Intel XDK build server and when a custom debug module is created by the Debug tab.

How do I fix this "unknown error: cannot find plugin.xml" when I try to remove or change a plugin?

Removing or changing a plugin in your project sometimes generates an "unknown error: cannot find plugin.xml" message.

This is not a common problem, but if it does happen it means a file in your plugins directory is probably corrupt (usually one of the JSON files found inside the plugins folder at the root of your project folder).

The simplest fix is to:

  • make a list of ALL of your plugins (esp. the plugin ID and version number, see image below)
  • exit the Intel XDK
  • delete the entire plugins directory inside your project
  • restart the Intel XDK

The XDK should detect that all of your plugins are missing and attempt to reinstall them. If it does not automatically re-install all or some of your plugins, then reinstall them manually from the list you saved in step one (see the image below for the important data that documents your plugins).

NOTE: if you re-install your plugins manually, you can use the third-party plugin add feature of the plugin management system to specify the plugin id to get your plugins from the Cordova plugin registry. If you leave the version number blank the latest version of the plugin that is available in the registry will be retrieved by the Intel XDK.

Why do I get a "build failed: the plugin contains gradle scripts" error message?

You will see this error message in your Android build log summary whenever you include a Cordova plugin that includes a gradle script in your project. Gradle scripts add extra Android build instructions that are needed by the plugin.

The current Intel XDK build system does not allow the use of plugins that contain gradle scripts because they present a security risk to the build system and your Intel XDK account. An unscrupulous user could use a gradle-enabled plugin to do harmful things with the build server. We are working on a build system that will ensure the necessary level of security to allow for gradle scripts in plugins, but until that time, we cannot support plugins that include gradle scripts.

The error message in your build summary log will look like the following:

In some cases the plugin gradle script can be removed, but only if you manually modify the plugin to implement whatever the gradle script was doing automatically. In some cases this can be done easily (for example, the gradle script may be building a JAR library file for the plugin), but sometimes the plugin is not easily modified to remove the need for the gradle script. Exactly what needs to be done to the plugin depends on the plugin and the gradle script.

You can find out more about Cordova plugins and gradle scripts by reading this section of the Cordova documentation. In essence, if a Cordova plugin includes a build-extras.gradle file in the plugin's root folder, or if it contains one or more lines similar to the following, inside the plugin.xml file:

<framework src="some.gradle" custom="true" type="gradleReference" />

it means that the plugin contains gradle scripts and will be rejected by the Intel XDK build system.
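
A quick way to vet a downloaded plugin before adding it to your project is to search its files for these two telltale signs. The sketch below fabricates a sample plugin.xml under /tmp (the path, plugin ID, and gradle file name are made up for illustration) and runs both checks:

```shell
# Create a fabricated example plugin for illustration only.
mkdir -p /tmp/xdk-plugin-check
cat > /tmp/xdk-plugin-check/plugin.xml <<'EOF'
<plugin id="example-plugin" version="1.0.0">
    <framework src="push.gradle" custom="true" type="gradleReference" />
</plugin>
EOF

# Check 1: any <framework ... type="gradleReference"> line in plugin.xml.
grep -n 'gradleReference' /tmp/xdk-plugin-check/plugin.xml

# Check 2: any gradle script files shipped with the plugin.
find /tmp/xdk-plugin-check -name '*.gradle'
```

Any output from either command means the plugin will be rejected by the Intel XDK build system.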

How does one remove gradle dependencies for plugins that use Google Play Services (esp. push plugins)?

Our Android (and Crosswalk) CLI 5.1.1 and CLI 5.4.1 build systems include a fix for an issue in the standard Cordova build system that allows some Cordova plugins to be used with the Intel XDK build system without their included gradle script!

This fix only works with those Cordova plugins that include a gradle script for one and only one purpose: to set the value of applicationID in the Android build project files (such a gradle script copies the value of the App ID from your project's Build Settings, on the Projects tab, to this special project build variable).

Using the phonegap-plugin-push as an example, this Cordova plugin contains a gradle script named push.gradle, that has been added to the plugin and looks like this:

import java.util.regex.Pattern

def doExtractStringFromManifest(name) {
    def manifestFile = file(android.sourceSets.main.manifest.srcFile)
    def pattern = Pattern.compile(name + "=\"(.*?)\"")
    def matcher = pattern.matcher(manifestFile.getText())
    matcher.find()
    return matcher.group(1)
}

android {
    sourceSets {
        main {
            manifest.srcFile 'AndroidManifest.xml'
        }
    }

    defaultConfig {
        applicationId = doExtractStringFromManifest("package")
    }
}

All this gradle script is doing is inserting your app's "package ID" (the "App ID" in your app's Build Settings) into a variable called applicationID for use by the build system. It is needed, in this example, by the Google Play Services library to ensure that calls through the Google Play Services API can be matched to your app. Without the proper App ID, the Google Play Services library cannot distinguish between multiple apps on an end user's device that use it.
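
What the gradle script's regex extraction does can be sketched in plain JavaScript (the manifest string below is a fabricated example, not part of the plugin):

```javascript
// JavaScript sketch of the gradle script's doExtractStringFromManifest().
// The manifest text here is a made-up example for illustration.
var manifest = '<manifest xmlns:android="http://schemas.android.com/apk/res/android"\n' +
               '          package="io.cordova.hellocordova">';

function extractStringFromManifest(name) {
  // Same pattern as the gradle script: name="…", with a non-greedy capture group.
  var match = new RegExp(name + '="(.*?)"').exec(manifest);
  return match[1];
}

console.log(extractStringFromManifest("package")); // io.cordova.hellocordova
```

The captured value is exactly what ends up in the applicationID build variable.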

The phonegap-plugin-push is being used as an example for this article. Other Cordova plugins exist that can also be used by applying the same technique (e.g., the pushwoosh-phonegap-plugin will also work using this technique). It is important that you first determine that only one gradle script is being used by the plugin of interest and that this one gradle script is used for only one purpose: to set the applicationID variable.

How does this help you and what do you do?

To use a plugin with the Intel XDK build system that includes a single gradle script designed to set the applicationID variable:

  • Download a ZIP of the plugin version you want to use from that plugin's git repo.

    IMPORTANT: be sure to download a released version of the plugin; the "head" of the git repo may be "under construction." Some plugin authors make it easy to identify a specific version, and some do not, so be aware and choose carefully when you clone a git repo!

  • Unzip that plugin onto your local hard drive.

  • Remove the <framework> line that references the gradle script from the plugin.xml file.

  • Add the modified plugin into your project as a "local" plugin (see the image below).

In this example, you will be prompted to define a variable that the plugin also needs. If you know that variable's name (it's called SENDER_ID for this plugin), you can add it using the "+" icon in the image above, and avoid the prompt. If the plugin add was successful, you'll find something like this in the Projects tab:

If you are curious, you can inspect the AndroidManifest.xml file that is included inside your built APK file (you'll have to use a tool like apktool to extract and reconstruct it from your APK file). You should see something like the following highlighted line, which should match your App ID (in this example, the App ID was io.cordova.hellocordova):

If you see the following App ID, it means something went wrong. This is the default App ID for the Google Play Services library that will cause collisions on end-user devices when multiple apps that are using Google Play Services use this same default App ID:

There is no Entitlements.plist file, how do I add Universal Links to my iOS app?

The Intel XDK project does not provide access to an Entitlements.plist file. If you are using Cordova CLI locally you would have the ability to add such a file into the CLI platform build directories located in the CLI project folder. Because the Intel XDK build system is cloud-based, your Intel XDK project folders do not include these build directories.

A workaround has been identified by an Intel XDK customer (Keith T.) and is detailed in this forum post.

Why do I get a "signed with different certificate" error when I update my Android app in the Google Play Store?

If you submitted an app to the Google Play Store using a version of the Intel XDK prior to version 3088 (prior to March of 2016), you need to use your "converted legacy" certificate when you build your app in order for the Google Play Store to accept an update to your app. The error message you receive will look something like the following:

When using version 3088 (or later) of the Intel XDK, you are given the option to convert your existing Android certificate, that was automatically created for your Android builds with an older version of the Intel XDK, into a certificate for use with the new version of the Intel XDK. This conversion process is a one-time event. After you've successfully converted your "legacy Android certificate" you will never have to do this again.

Please see the following links for more details.

Back to FAQs Main

Coming Soon - the Intel® Quark™ SE Microcontroller C1000 Developer Kit


Coming soon: the Intel® Quark™ SE Microcontroller C1000 Developer Kit. Based on the Intel® Quark™ SE microcontroller C1000, the developer kit features a small form-factor board that contains, among other things, flash storage, a Bluetooth Low Energy (BLE) module with an integrated antenna, an 802.15.4 transceiver with an on-board antenna, and a 6-axis compass/accelerometer with a temperature sensor. A USB connection enables programming and debugging (JTAG) of the development platform.

Software support comes with the open source Intel® Quark™ Microcontroller Software Interface (Intel® QMSI) board support package, featuring all required drivers, sample applications, and support for the Zephyr Project* RTOS. In addition, Intel® System Studio for Microcontrollers provides an Eclipse*-based IDE for developing, optimizing, and debugging applications. Features include the Intel® Compiler for Intel® Quark™ Microcontrollers, the GNU* Compiler Collection (GCC), Intel® Integrated Performance Primitives for Microcontrollers, the Intel® QMSI board support package, and the Zephyr Project* RTOS.

Stay tuned for updates!

Boost JavaScript* Performance by Exploiting Vectorization using SIMD.js


As JavaScript* applications become more sophisticated, developers are increasingly looking for ways to optimize performance. Single Instruction Multiple Data (SIMD) operations enable you to process multiple data items at the same time when “data-parallelism,” the mutual independence of data, exists. In the past, these operations have been limited to low-level languages and languages that can map closely to the architecture, such as C/C++. Using SIMD.js, these operations are now available to use directly from JavaScript code. This enables JavaScript developers to easily exploit the hardware capabilities of the underlying architecture to significantly improve the performance of code that can benefit from data parallelism. Developers can also easily translate SIMD-optimized algorithms from languages like C/C++ to JavaScript.

In this article, we provide examples of SIMD operations, show you how to enable SIMD.js in Microsoft Edge* and ChakraCore*, and provide tips for writing JavaScript code that will avoid performance cliffs.

Understanding SIMD

Since 1997, Intel has been adding instructions to its processors to perform the same operation on multiple data items in parallel. These SIMD operations can accelerate performance in applications such as rendering calculations, 3D object manipulations, encryption, physics engines, compression/decompression algorithms, and image processing.

As a basic example of how SIMD works, consider adding two arrays so that C[i] = A[i] + B[i] for the entire dataset. A simple solution is to iterate over each pair of elements and perform the addition sequentially. The processor has a SIMD operation, though, that can enable you to perform an addition on multiple independent data chunks at the same time. If you process four data items at the same time, the process could be made up to four times faster. Essentially, the array is divided up into smaller fixed-size arrays, sometimes referred to as vectors.

SIMD operations are commonly used in C/C++, and in certain cases the compiler can automatically vectorize the code. The GCC* and Clang* compilers provide “vector_size” and “ext_vector_type” attributes to auto-vectorize C/C++ code that has data parallelism. The Intel® C++ Compiler and Intel® Fortran Compiler offer the “#pragma SIMD” directive for trying to auto-vectorize loops. Array notation is another Intel-specific language extension in Intel C++ Compilers that enables users to point to the data parallel code that the compiler should try to auto-vectorize. In other cases, intrinsics can be added in the source code to explicitly indicate where vectorization should be used. Intrinsics are language constructs that map directly to sequences of instructions on the underlying architecture. Because intrinsics are specific to a particular architecture, they should only be used if other approaches are not possible.

Intel® architecture supports a range of SIMD operation sets, starting with MMX™ technology (introduced in 1997), going through Intel® Streaming SIMD Extensions (Intel® SSE, Intel® SSE2, Intel® SSE3, Intel® SSE4.1, and Intel® SSE4.2) and Supplemental Streaming SIMD Extensions 3, and most recently with Intel® Advanced Vector Extensions (Intel® AVX), Intel® Advanced Vector Extensions 2 (Intel® AVX2), and Intel® Advanced Vector Extensions 512 (Intel® AVX-512). SIMD operations are supported on both Intel® Core™ and Intel® Atom™ processors today.

Why create SIMD.js?

Over the years, improvements in compilers and managed runtimes have significantly reduced the performance gap between JavaScript and native apps. One limitation, though, is that JavaScript is a managed language and so does not have direct access to the hardware. The execution engine abstracts away the hardware details, preventing access to the SIMD operations.

JavaScript expressions map less closely to the architecture than C expressions and the JavaScript language is dynamically typed, both of which make it difficult for a compiler to automatically vectorize loops.

To enable JavaScript programmers to use SIMD, Intel has worked with Mozilla, Google, Microsoft, and ARM to bring SIMD.js to JavaScript. SIMD.js is a set of JavaScript extension APIs that expose the SIMD operations of the underlying architecture. This makes it possible for existing C/C++ programs with intrinsics to be easily translated (by hand) to JavaScript and run in the browser, or for new JavaScript code to be written using SIMD operations.

SIMD.js: A simple array addition example

SIMD.js has been designed to cover the common overlap between major infrastructures supporting SIMD instructions: Intel SSE2 and ARM Neon*. Architectures that do not support the SIMD operations are still supported by SIMD.js, but they will not be able to match the performance of a SIMD architecture.

Table 1 shows some example JavaScript code for adding two arrays, and Table 2 shows how the same can be achieved using SIMD.js. SIMD.js supports 128-bit vectors, and since we are operating on 32-bit integer values, we can perform four addition operations at the same time. This potentially accelerates the computation by a factor of four.

/* Set up the integer arrays A, B, C */

var A = new Int32Array(size);
var B = new Int32Array(size);
var C = new Int32Array(size);

for (i = 0; i < size; i++)
{
  C[i] = A[i] + B[i];
}

Table 1: JavaScript* scalar/sequential array addition.

/* Set up the integer arrays A, B, C */

var A = new Int32Array(size);
var B = new Int32Array(size);
var C = new Int32Array(size);

/* Note that i increases by 4 each iteration */

for (i = 0; i < size; i += 4)
{
  /* load vector of four integers from A, starting at index i, into variable x */
  var x = SIMD.Int32x4.load(A, i);

  /* load vector of four integers from B, starting at index i, into variable y */
  var y = SIMD.Int32x4.load(B, i);

  /* SIMD addition operation on vectors x and y */
  var z = SIMD.Int32x4.add(x, y);

  /* SIMD operation to store the vector z results in array C */
  SIMD.Int32x4.store(C, i, z);
}

Table 2: SIMD.js example of array addition.

SIMD.js availability and performance

SIMD.js is currently at stage 3 of the TC39 approval process and on track to be part of ECMAScript* 2017. As a result of a close collaboration between Intel engineers and Microsoft, SIMD.js is available in the Windows® 10 Microsoft Edge browser as an experimental feature for developers to try. It is also included in ChakraCore, the open source version of the browser's engine.

Figures 1 and 2 show the impact of SIMD.js in the latest Microsoft Edge browser. The figures show that the number of meshes (or 3D images) that can be supported at 60 frames per second more than doubles, from 63 to 133, when using SIMD.js. This demo, called Skinning SIMD.js, was handwritten in asm.js and SIMD.js. (Asm.js code is usually cross-compiled and rarely written by hand, but in this case the module was simple enough to be handwritten.)

Skinning SIMD Demo

Enabling SIMD.js

You can use SIMD.js in ChakraCore (which we have used for preparing this paper) or in the Microsoft Edge browser.

Enabling SIMD.js in Microsoft ChakraCore

ChakraCore is the core part of Chakra, the high-performance JavaScript engine that powers the Microsoft Edge browser and Windows applications written in HTML/CSS/JavaScript. ChakraCore also supports the JavaScript Runtime (JSRT) APIs, which allow you to easily embed ChakraCore in independent applications. ChakraCore is currently verified to work on Windows platforms. For details on how to get, build, and use ChakraCore, see https://github.com/Microsoft/ChakraCore.

ChakraCore can run directly from the command line or as part of Node.js. ChakraCore has the latest SIMD.js features and optimizations that are yet to be merged into Microsoft Edge. To run SIMD.js code from the command line, make sure to include the flags “-simdjs -simd128typespec”.

Enabling SIMD.js in Microsoft Edge

Microsoft Edge ships as part of Windows 10. SIMD.js is available in the Microsoft Windows RedStone1 (RS1) update. For the latest features, install the latest version of the Windows 10 release. To enable SIMD.js in the Microsoft Edge browser:

  1. Navigate to “about:flags”.
  2. Under “JavaScript”, check “Enable experimental JavaScript features”.
  3. Restart the browser.

Writing performant SIMD.js code for ChakraCore

SIMD.js offers an extensive set of data types (see Table 3) and operations that are all geared towards boosting performance. There is a lot of flexibility in how the SIMD.js operations could be used. For that reason, JavaScript guarantees that every valid use will work, but does not guarantee the performance. The speed will depend on the implementation of both the JavaScript engine and your own code. In some cases, there could be hidden performance cliffs when using SIMD.js.

This section explains how SIMD.js is implemented in ChakraCore and offers some guidelines for writing SIMD.js code that will work with the Full JIT (just-in-time compiler) for optimal performance.

Supported data types in SIMD.js:

* Float32x4 (32-bit float x 4 data items)

* Int32x4 (32-bit integer x 4 data items)

* Int16x8 (16-bit integer x 8 data items)

* Int8x16 (8-bit integer x 16 data items)

* Uint32x4 (32-bit unsigned x 4 data items)

* Uint16x8 (16-bit unsigned x 8 data items)

* Uint8x16 (8-bit unsigned x 16 data items)

* Bool32x4 (32-bit bool x 4 data items)

* Bool16x8 (16-bit bool x 8 data items)

* Bool8x16 (8-bit bool x 16 data items)

Table 3: SIMD.js supports a range of data types.
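
Because SIMD.js is only available in the experimental engines described above, the sketch below bundles a minimal scalar stand-in for Int32x4 (an assumption for illustration, not the real vectorized implementation) to show how these typed vectors are constructed and read back:

```javascript
// Minimal scalar stand-in for SIMD.Int32x4 -- illustration only; a real
// SIMD.js engine maps these calls onto 128-bit vector registers.
var SIMD = {};
SIMD.Int32x4 = function (a, b, c, d) { return [a | 0, b | 0, c | 0, d | 0]; };
SIMD.Int32x4.splat = function (n) { return SIMD.Int32x4(n, n, n, n); };
SIMD.Int32x4.extractLane = function (v, i) { return v[i]; };

var v = SIMD.Int32x4(10, 20, 30, 40); // four 32-bit integer lanes
var s = SIMD.Int32x4.splat(7);        // (7, 7, 7, 7)

console.log(SIMD.Int32x4.extractLane(v, 2)); // 30
console.log(SIMD.Int32x4.extractLane(s, 0)); // 7
```

The wider types (Int16x8, Int8x16, and so on) follow the same pattern with more, narrower lanes per 128-bit vector.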

Understanding the three versions of SIMD.js in ChakraCore

There are three versions of SIMD.js implemented in ChakraCore:

  • Runtime library. To improve start-up times, ChakraCore starts to interpret the code without compilation using the runtime library. The runtime library is also used by the ChakraCore interpreter and as a fallback when the Full JIT fails to optimize SIMD.js operations, whether that is due to ineffective or ambiguous code, or incompatible hardware. The runtime library is unoptimized, so it guarantees the code will execute but does not have any of the performance gain associated with using SIMD operations. Any performant code should spend as little time as possible in the library. The SIMD.js runtime library may offer a lower performance than code that does not use SIMD.js, so some developers may choose to offer a sequential version of their code for times when SIMD cannot be used.
  • Full JIT. The Full JIT is a type-specializing compiler that attempts to bridge the gap between the high-level data types used in JavaScript and the low-level data types used by the architecture. It will, for example, attempt to make numbers 32-bit integers (rather than JavaScript's default 64-bit floats) where possible to improve vectorization. Where this causes correctness issues, the implementation falls back to the runtime library. Writing efficient SIMD.js code that targets this compiler is the focus of this paper.
  • Asm.js. Asm.js is a strict subset of JavaScript that utilizes JavaScript syntax to distinguish between integer and floating point types and provides other rules that ensure highly effective ahead-of-time compilation, enabling near-native performance. Asm.js code is usually created by compiling from C/C++ using the Emscripten LLVM* compiler. SIMD.js is part of the Asm.js specification and is used either when translating SIMD intrinsics or if the translating compiler does auto-vectorization (something that Emscripten currently supports). Asm.js is not designed to be written by hand, and developers should write their apps in C/C++ and compile to JavaScript to use this implementation. The details of this are outside the scope of this paper. For more information see http://asmjs.org/spec/latest/ and http://kripken.github.io/mloc_emscripten_talk/.

Tip #1: Use SIMD for hot code

Chakra is a multi-tiered execution engine. To achieve a good start-up time, the code runs first through the unoptimized Runtime library. If the code runs a few times, the interpreter will recognize it is repeating code and run it through the Full JIT. Only the Full JIT will carry out SIMD.js optimizations that yield a performance boost. For that reason, it is advisable to only use SIMD.js for code in your applications that is repeated often or consumes a lot of processor time (hot code).

Avoid using SIMD.js in start-up code or loops/functions that don’t run for a long time; otherwise the Full JIT might not kick in, and you may see a degradation of performance compared to sequential code when your SIMD.js code runs in the Runtime library. Some performance testing and code tweaking might be required to achieve performance improvements.

One caveat for this tip: it is important to extract initializations of SIMD constants out into cold sections, to prevent SIMD constants from being constructed over and over again.
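
That caveat can be sketched as follows (again using a scalar stand-in for the SIMD API, since real SIMD.js requires the experimental engines above): construct the constant once, outside the hot loop.

```javascript
// Scalar stand-in for SIMD.Float32x4 -- illustration only.
var SIMD = { Float32x4: function (a, b, c, d) { return [a, b, c, d]; } };
SIMD.Float32x4.add = function (x, y) {
  return [x[0] + y[0], x[1] + y[1], x[2] + y[2], x[3] + y[3]];
};

// Cold section: construct the SIMD constant once and reuse it.
var ONES = SIMD.Float32x4(1, 1, 1, 1);

function countUp(iterations) {
  var acc = SIMD.Float32x4(0, 0, 0, 0);
  for (var i = 0; i < iterations; i++) {
    // Good: reuses ONES. Writing SIMD.Float32x4(1, 1, 1, 1) here instead
    // would rebuild the constant on every pass through the hot loop.
    acc = SIMD.Float32x4.add(acc, ONES);
  }
  return acc;
}

console.log(countUp(3)); // [ 3, 3, 3, 3 ]
```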

Tip #2: Explicitly convert strings to numbers

The SIMD constructors and splat operations are type-generic, so they’ll accept both strings and numbers, as shown in Table 4. However, if strings are found, they will not be optimized as arguments for the SIMD APIs. As a result, execution will fall back to the Runtime library, causing a loss of performance.

To guarantee performance, always use arguments of Number or Bool types. If your code depends on non-Number types, introduce the conversion explicitly in your code, as shown in Table 5.

The majority of SIMD operations throw a TypeError on unexpected types, except for a few that expect any JavaScript type and coerce it to a Number or Bool.

/* Set up variables */
var s  = myString; // string
var n  = myVar;    // could be null

/* Set up f4 as vector of s,s,n,n */
var f4 = SIMD.Float32x4(s, s, n, n);

/* After splat, i4 = vector of s,s,s,s */
var i4 = SIMD.Int32x4.splat(s);

Table 4: SIMD constructor with arbitrary types.

var s  = Number(myString); // explicitly converted to number
var n  = Number(myVar); // explicitly converted to number

var f4 = SIMD.Float32x4(s, s, n, n);
var i4 = SIMD.Int32x4.splat(s);

Table 5: SIMD constructors with explicit coercions.

Tip #3: Optimize vector lane access

There are fast instructions that can be used to extract one of the data items (or lanes) from a vector. However, the Full JIT implementation is not able to handle variable index values in these commands, so execution will fall back to the Runtime library in that case. To avoid that situation, use integer literals for lane indices to remove the uncertainty so the compiler can optimize it. Tables 6 and 7 show how code should be rewritten to ensure Full JIT execution.

This guideline also applies to shuffle and swizzle commands, which can be used to rearrange the order of the lanes.
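
The swizzle case can be sketched the same way (scalar stand-in for the SIMD API, illustration only):

```javascript
// Scalar stand-in for SIMD.Int32x4 and its swizzle -- illustration only.
var SIMD = { Int32x4: function (a, b, c, d) { return [a, b, c, d]; } };
SIMD.Int32x4.swizzle = function (v, x, y, z, w) { return [v[x], v[y], v[z], v[w]]; };

var v = SIMD.Int32x4(1, 2, 3, 4);

// Good: literal lane indices -- the Full JIT can compile this to one shuffle.
var reversed = SIMD.Int32x4.swizzle(v, 3, 2, 1, 0);
console.log(reversed); // [ 4, 3, 2, 1 ]

// Bad: variable indices are ambiguous at compile time, so execution
// falls back to the unoptimized Runtime library.
var order = [3, 2, 1, 0];
var slow = SIMD.Int32x4.swizzle(v, order[0], order[1], order[2], order[3]);
```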

/* function accepts a vector of Int32x4 and puts it in v */
function SumAllLanes(v)
{
  var sum = 0;
  for (var i = 0; i < 4; i++)
  {
    /* Extract lane i from vector v */
    sum += SIMD.Int32x4.extractLane(v, i);
  }
  return sum;
}

Table 6: Extract lane with variable lane index.

function SumAllLanes(v)
{
  var sum = 0;
  /* Use literals to extract each lane for Full JIT optimization */
  sum += SIMD.Int32x4.extractLane(v, 0);
  sum += SIMD.Int32x4.extractLane(v, 1);
  sum += SIMD.Int32x4.extractLane(v, 2);
  sum += SIMD.Int32x4.extractLane(v, 3);

  return sum;
}

Table 7: Extract lane with literals.

Tip #4: Consistently define variables

If the argument types for SIMD operations do not match or are ambiguous, the Full JIT will decide that a TypeError exception is likely to occur and will not type-specialize, passing execution back to the Runtime library.

Table 8 shows an example of this in practice. In this case, the programmer knows that condition1 == condition2 (if one is true, so is the other, always). The compiler can’t know or infer that, though. In the second if condition, the compiler will be conservative and assume that x could be a Number or Float32x4, because the previous condition creates that possibility. Passing a Number argument to a Float32x4.add operation will cause a TypeError, so the Full JIT will not optimize.

Additionally, y is not defined on every execution path, so in the second if condition, y could be Undefined or Float32x4. Again, that would cause the Full JIT to give up on optimizing.

To avoid these cases, follow these guidelines:

  1. Always define your variables on all execution paths. Never leave SIMD variables undefined. In Table 8, for example, where x is defined as 0, y should also be defined.
  2. Avoid assigning values of different types to a variable. In Table 8, defining x as a Float32x4 and as Number creates uncertainty that prevents optimization.
  3. If (2) is not possible, try to only mix valid SIMD types. For example, using Int32x4 and Float32x4 types would be okay, but don’t mix strings and SIMD types.
  4. If (2) and (3) are not possible, guard any SIMD code using ambiguous variables with a check operation. Table 9 shows an example of how to do that. The check operation will enable the compiler to confirm that a variable is of the expected type, so it can continue optimizing. If a variable is not of the right type, a TypeError is thrown and execution reverts to the Runtime library. Including the check ensures that the Full JIT can attempt optimization. Without it, the uncertainty would result in execution falling immediately back to the Runtime library. There may be a small overhead associated with using a check, but there is a drastic slowdown if code that could be optimized executes in the Runtime library instead.
function Compute()
{
  var x, y;
  if (condition1)
  {
     x = SIMD.Float32x4(1,2,3,4);
     y = SIMD.Float32x4(-1,-2,-3,-4);
  }
  else
  {
     x = 0;
  }

  /* developer knows that condition1 == condition2 always */
  if (condition2)
  {

    /* add vector x to itself. So (1,2,3,4) -> (2,4,6,8) */
    x = SIMD.Float32x4.add(x, x);
    y = SIMD.Float32x4.add(y, y);
  }
}

Table 8: Example of polymorphic variables.

function Compute()
{
  var x, y;
  if (condition1)
  {
     x = SIMD.Float32x4(1,2,3,4);
     y = SIMD.Float32x4(-1,-2,-3,-4);
  }
  else
  {
     x = 0;
  }

  /* developer knows that condition1 == condition2 always */
  if (condition2)
  {
    /* check x is Float32x4 */
    x = SIMD.Float32x4.check(x);

    /* add vector x to itself. So (1,2,3,4) -> (2,4,6,8) */
    x = SIMD.Float32x4.add(x, x);

    y = SIMD.Float32x4.check(y);
    y = SIMD.Float32x4.add(y, y);
  }
}

Table 9: Using check operations to ensure SIMD code optimization.

Conclusion

We hope this paper helps you produce better and faster code and has shown you how SIMD operations can improve the performance of data-intensive work in JavaScript. Although the performance gain may vary depending on implementation, the coding patterns and SIMD use cases presented here will also apply to other browsers.

References

SIMD.js specification page: http://tc39.github.io/ecmascript_simd/

ChakraCore GitHub* Repo: https://github.com/Microsoft/ChakraCore

Node.js on ChakraCore: https://github.com/nodejs/node-chakracore

Skinning SIMD.js demo: http://huningxin.github.io/skinning_simd/

Why Should You Care About Machine Learning?


Machine learning has been around for a while, so even if you haven’t worked on it as a developer, you’re probably very familiar with it as a consumer. When you add something to your cart on Amazon and see a list of other recommended products that you might also like, that's an example of machine learning. Essentially, machine learning is the development of computer programs that can learn and create their own rules, based on data.

Developing machine learning applications differs from developing standard applications. Instead of writing code that solves a specific problem, machine learning developers create algorithms that can take in data and then build their own logic based on that data. In the Amazon example, data about customer behavior and sales is used to determine which products you're most likely to be interested in. The system isn't looking at a 1:1 relationship between what's in your cart and one other specific product, like a pairing a marketer or salesperson chose to sell together; instead, it takes into account all of the existing data, from all visits and all sales, and uses that to predict behavior and determine recommendations that make sense. New products, and new data, are always being added, so the recommendation results continuously adjust and improve.
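As a minimal sketch of this idea (with invented products and orders), the example below derives a recommendation purely from co-occurrence counts in past orders, rather than from any hand-written pairing rule:

```javascript
// Toy illustration of learning from data rather than hand-written
// rules: count how often items appear in the same order, then
// recommend the most frequent co-occurring item. Product names and
// orders are invented; a real recommender uses far more data and
// more sophisticated models.
function buildCooccurrence(orders) {
  const counts = {};
  for (const order of orders) {
    for (const a of order) {
      counts[a] = counts[a] || {};
      for (const b of order) {
        if (a !== b) counts[a][b] = (counts[a][b] || 0) + 1;
      }
    }
  }
  return counts;
}

function recommend(counts, item) {
  const related = counts[item] || {};
  let best = null;
  for (const other of Object.keys(related)) {
    if (best === null || related[other] > related[best]) best = other;
  }
  return best;
}

const orders = [
  ["camera", "sd-card"],
  ["camera", "sd-card", "tripod"],
  ["camera", "tripod"],
  ["camera", "sd-card"],
];
// New orders simply become new data; no rule is ever written by hand.
```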

Why should you care about machine learning now? With the current increase in IoT and connected devices, we now have access to so much more data—and along with it, an increased need to manage and understand what we know.

Also, because so many different industries are starting to rely on machine learning, you have a great opportunity as a developer to learn how it works and how it might bring value to your product.
 

Types of Machine Learning Algorithms

There are four main types of machine learning:

Supervised – The training data consist of labeled inputs and known outcomes, which the machine studies until it can apply the label on its own. For example, to create a face detection algorithm, you might provide images of landscapes, people, animals, buildings, and so on, with their respective labels until the machine could reliably recognize a face in an unlabeled image.

Unsupervised – The machine analyzes unlabeled data and categorizes it based on similarities it has identified. So, you might provide the same photos as in the above example, but without their labels. The machine would still be able to cluster images based on shared characteristics (the sharp lines of a cityscape vs. the round shape of a face, for example)—but it would not be able to say that the round shape is a “face.” These programs are used to identify groupings within data sets that may be difficult or impossible for a human to see.

Semi-supervised – A combination of the above, used when there is a large amount of data but only some of it is labeled. Unsupervised learning techniques might be used to group and cluster the unlabeled data, while supervised learning techniques can be used to predict labels for it.

Reinforcement learning – Uses simple reward data to train the machine on ideal behavior within a specific context.
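The supervised case can be made concrete with a minimal sketch: a 1-nearest-neighbor classifier that labels a new point by the closest labeled training point. The two-dimensional "features" and labels below are invented for illustration; real systems use much larger feature vectors and datasets.

```javascript
// Minimal supervised-learning sketch: 1-nearest-neighbor
// classification over labeled 2-D points (data invented for
// illustration).
function nearestNeighbor(training, point) {
  let best = null;
  let bestDist = Infinity;
  for (const example of training) {
    // Squared Euclidean distance is enough for comparison.
    const dx = example.x - point.x;
    const dy = example.y - point.y;
    const dist = dx * dx + dy * dy;
    if (dist < bestDist) {
      bestDist = dist;
      best = example.label;
    }
  }
  return best;
}

const training = [
  { x: 1, y: 1, label: "face" },
  { x: 1, y: 2, label: "face" },
  { x: 8, y: 9, label: "landscape" },
  { x: 9, y: 8, label: "landscape" },
];
// A new unlabeled point is classified by its closest labeled neighbor.
```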
 

Faster than We Can Do by Hand

The biggest advantage to machine learning is that it allows us to do things much more quickly than we'd be able to do otherwise. It can't solve problems that a human being couldn't also solve, but it can take in a huge amount of data and very quickly build connections and predictions based on it. That becomes even more important as we continue to expand the amount of data we're generating through IoT and connected devices. Think of a smart outlet or a step counter—or really, anything in your life that generates data—and then think about how much data it’s able to generate on a daily basis. And then multiply that by every person who owns that product. The more connected we are, the more information there is; machine learning allows us to identify important patterns and insight, at a speed that humans simply can’t.
 

What’s the Market?

Any industry with access to data can benefit from a greater understanding about what that data means—whether that’s a manufacturing plant trying to anticipate repairs, or the makers of a driverless car. Here’s how some industries are using machine learning:


 

Hello, My Name Is Chatbot: Current Trend

This year, Facebook launched chatbot support in Messenger, making it possible for companies and consumers to engage using bots. Essentially, this means that when a customer visits your Facebook page, they can hit Message as if sending a direct message and interact right away with AI that can help them make decisions and learn about products. With each interaction, the chatbot improves. Specific transactions can also take place directly; click the car icon in Messenger to request a ride from Uber, for example.

These chatbots not only send text, but also images and call-to-action buttons—which means that they can handle automated customer service, e-commerce assistance, and even content. As accuracy continues to improve, this begins to look a lot like an automated concierge, allowing consumers to quickly and easily get the information and service they’re looking for. This is part of a bigger trend, sometimes called “conversational commerce," which taps into the popularity of mobile messaging apps and the increasing power of AI—where the future of shopping happens in a chat window.
 

A Few Places to Get You Started

One of the best ways to learn more about machine learning is to look for groups in your area. There are also a lot of resources online. Here are some links to get you started:

Machine learning is a huge topic, with a rich history, and a lot of things for you to consider. It’s also a topic that we’re very interested in, so check back here for more exploration of this topic in the future.

IoT Path-To-Product: The Making of a Connected Transportation Solution


To demonstrate a rapid path-to-product edge IoT solution for the transportation sector, a proof of concept was created using the Grove* IoT Commercial Developer Kit. That prototype was scaled to an industrial solution using an Intel® IoT Gateway, industrial sensors, and Intel® System Studio. This solution monitors the temperature within a truck’s refrigerated cargo area, as well as the open or closed status of the cargo doors. The gateway generates events based on changes to those statuses, to support end-user functionality on a tablet PC application.

Figure 1. The finished product demonstration with custom trailer housing.

 

The core opportunity associated with the Internet of Things (IoT) lies in adding intelligence and connectivity to everyday devices, harnessing information and putting it to use in ways that add value. Monitoring the status of a refrigerated semi-truck trailer hauling perishable goods is a simple example. Alerting the driver when the temperature passes outside a pre-set range or when cargo doors are opened unexpectedly can help avoid financial losses. An IoT solution to monitor and track these aspects of a semi-truck trailer could therefore be a viable commercial product.

Intel undertook a development project to investigate this and other opportunities associated with building a connected transportation solution. The project was presented as a demonstration at Intel® Developer Forum in 2015 and again in 2016. This document recounts the course of the project development effort, to help drive inquiry, invention, and innovation for the Internet of Things.

For a how-to for this project, see IoT Path-to-Product: How to Build a Connected Transportation Solution.

Visit GitHub for this project's latest code samples and documentation.

Introduction

The goal of this project was to build a functional prototype and then to transition that proof of concept into an industrial-grade solution for scalable deployment as a commercial product. Rapid prototyping is facilitated by using the Grove IoT Commercial Developer Kit, which consists of an Intel® NUC system, the Intel® IoT Gateway Software Suite, and sensors and components from the Grove Starter Kit Plus (manufactured by Seeed). The project also uses the Arduino* 101 board. Hardware used in the prototype stage of this project is illustrated in Figure 2, and specifications are given in Table 1.

Note: Known in the United States as “Arduino 101,” this board is known elsewhere as “Genuino* 101.” It is referred to throughout the rest of this document as the “Arduino 101” board.

Table 1. Prototype hardware used in connected transportation project

 

Intel® NUC Kit
DE3815TYKHE

Arduino* 101
Board

Processor/
Microcontroller

Intel® Atom™ Processor E3815 (512K Cache, 1.46 GHz)

Intel® Curie™ Compute Module @ 32 MHz

Memory

8 GB DDR3L-1066 SODIMM (max)

  • 196 KB Flash Memory
  • 24 KB SRAM

Networking / IO

Integrated 10/100/1000 LAN

  • 14 Digital I/O Pins
  • 6 Analog Input Pins

Dimensions

190 mm x 116 mm x 40 mm

68.6 mm x 53.4 mm

Full Specs

specs

specs

Figure 2. Intel® NUC Kit DE3815TYKHE and Arduino* 101 board.

The course of this project demonstrates the value of the path-to-product approach: it allows a prototype to be built with a relatively small investment of time and effort, followed by an efficient transition to a commercially viable solution. Using a precompiled OS and prebuilt RPM packages helps eliminate unnecessary downloads, OS customization, and the work of identifying the libraries needed to bring a project to life.

This project was devised to contribute to innovation around solutions being produced and marketed for similar use cases. While this project was designed to provide only basic functionality, its design is flexible and extensible enough that a variety of features could be added. In particular, the project could be expanded in the future to include web connectivity, cloud capabilities, remote monitoring, and other components.

In the project’s earliest stages, the team listed potential features for the prototype and the product. A sample of these included rear-door status (open or closed), temperature of the trailer, alarms based on the state of the door and temperature, an online application to view data, and in-cab monitoring of information. To demonstrate the viability of creating a robust solution while maintaining simplicity and low cost, the team elected to limit the bill of materials for the prototype phase to just the contents of the Grove IoT Commercial Developer Kit.

Creating the Prototype Proof of Concept

To allow for separation of duties and efficient progress, the team divided the solution into three primary areas of effort:

  • User interface (UI). Part of the team began working on the actual production UI layout and design, looking ahead to the production stages of the project.
  • Application business logic. Part of the team began working on the logic for the prototype application, while also recognizing that changes would be needed as the project progressed toward the commercial solution.
  • Prototype sensor solution. Part of the team began to create the configuration of sensors for the solution, utilizing the UPM/MRAA libraries for rapid development.

This approach of separating the project into discrete segments allowed the team to progress through the prototype phase more rapidly than otherwise, taking best advantage of the skill sets available within the team. In particular, while the user interface was not strictly required in the early phases of the project, it was expected to require the most development time of the three areas listed above. Beginning it as early as possible therefore allowed it to be well underway by the time it was needed later in the project.

In terms of the application logic, the team was able to look ahead to the expected final functional prototype and make decisions in the early prototype process looking to the future. Overall, the team expected the operation of the door sensor to be relatively simple, allowing greater attention to the proper utilization of a temperature sensor on a small and then commercial scale.

By utilizing the sensors in the Grove Starter Kit Plus, we were able to rapidly create a prototype with a functional sensor environment that the UI team could work with. This approach enabled layout and design elements to come to life quickly and provided a future framework for the final functional use case. The prototype configuration, with the Intel NUC, Arduino 101 board, and sensors, is illustrated in Figure 3. The bill of materials is given in Table 2.

Figure 3. Developer kit with selected sensors enabled.

Table 2. Connected transportation prototype components.

 

Component

Details

Base System

Intel® NUC Kit DE3815TYKHE

http://www.intel.com/content/www/us/en/support/boards-and-kits/intel-nuc-kits/intel-nuc-kit-de3815tykhe.html

Arduino* 101 Board

https://www.arduino.cc/en/Main/ArduinoBoard101

USB Type A to Type B Cable

For connecting Arduino 101 board to NUC

Components from Grove* IoT Commercial Developer Kit

Base Shield V2

http://www.seeedstudio.com/depot/Base-Shield-V2-p-1378.html

Touch Sensor Module

http://www.seeedstudio.com/depot/Grove-Touch-Sensor-p-747.html

Button Module

http://www.seeedstudio.com/depot/Grove-Button-p-766.html

Temperature Sensor Module

http://www.seeedstudio.com/depot/Grove-Temperature-Sensor-p-774.html

Buzzer Module

http://www.seeedstudio.com/depot/Grove-Buzzer-p-768.html

Red LED

http://www.seeedstudio.com/depot/Grove-Red-LED-p-1142.html

LCD with RGB Backlight Module

http://www.seeedstudio.com/depot/Grove-LCD-RGB-Backlight-p-1643.html

Use Case

The use case was built and displayed through an administration application to support the following scenario:

 
  1. Press button to start the use case (simulating opening the door):

    a.    Sets the threshold to ambient temperature +5 degrees.
    b.    Solid red LED lights up in cab.
    c.    LCD displays current temperature and door status (open), as shown in Figure 4.

    Figure 4. Showing door status.

     

  2. Touch the temperature sensor to raise its reading five degrees above the ambient room temperature:

    a.    Buzzer sounds.
    b.    Red LED blinks continuously.
    c.    LCD turns red and displays actual temperature and door status (open), as shown in Figure 5.

    Figure 5. Showing high temperature status.

     

  3. Touch the touch sensor to acknowledge the alert (buzzer turns off).
  4. Press button to close the door:

    a.    Red LED continues to blink until temperature passes below threshold.
    b.    LCD displays temperature and door status (closed).
    c.    When temperature passes below threshold, blinking red LED turns off, solid green LED lights up, and LCD turns green.
    d.    LCD displays temperature and door status (closed).
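The alert logic in the steps above can be sketched as follows. The sketch is written in JavaScript for illustration (the project itself was implemented in Java), and the thresholds and field names are simplified from the use case.

```javascript
// Simplified sketch of the prototype's door/temperature alert logic.
// The +5 degree threshold and state fields mirror the use case steps;
// LED, LCD, and buzzer behavior are reduced to a single alarm flag.
function createMonitor(ambientTemperature) {
  const state = {
    threshold: ambientTemperature + 5, // set when the use case starts
    doorOpen: false,
    alarm: false,
  };
  return {
    openDoor() { state.doorOpen = true; },
    closeDoor() { state.doorOpen = false; },
    updateTemperature(t) {
      if (t > state.threshold) {
        state.alarm = true; // buzzer sounds, red LED blinks
      } else if (!state.doorOpen) {
        state.alarm = false; // door closed and temperature back in range
      }
      return { temperature: t, doorOpen: state.doorOpen, alarm: state.alarm };
    },
  };
}
```

For example, opening the door and raising the temperature past the threshold raises the alarm; closing the door and letting the temperature fall below the threshold clears it.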

Simulation

This simulation demonstrates how monitoring temperature changes and alerting the driver when the temperature becomes critical can reduce potential loss of temperature-sensitive cargo, as illustrated in Figures 6 and 7.

Figure 6. Log file showing events.
Figure 7. Base online view as envisioned.

Target Commercial Solution

With an operational prototype based on the Intel IoT Developer Kit and Grove IoT Commercial Developer Kit, it was necessary to determine how to proceed to create a commercial solution. Table 3 outlines how the components used in the prototype phase could be transitioned to a production solution.

Table 3. Components in prototype versus production solution.

 

Prototype

Production Solution

Buzzer

Grove Kit Buzzer

Alarm on Phone (customer application)

LCD

Grove LCD panel

Screen on Phone (customer application)

LED (RED)

Grove Kit LED

Light on Phone (customer application)

Button

Grove Kit Button

Industrial magnetic sensor with paired magnet

Touch Sensor

Grove Kit Touch Sensor

Touch on Phone (customer application)

Temp Sensor

Grove Kit Temp Sensor

Commercial Temp Sensor

Heat Source

Person’s Finger

20-watt Halogen Puck Light

Gateway

Intel® NUC and Arduino 101 Board

Intel® IoT Gateway

In addition, there are many commercial gateways available, with design differences making them suited to various industries and use cases. A key consideration for this project was a broad range of I/O options, to support both current and future functionality, specifically for connecting sensors to provide a data feed.

An Intel® IoT Gateway was chosen as the gateway device for the product portion of this project, as shown in Figure 8. The processing power and I/O functions were deemed sufficient for the presented commercial usage.

A wired Modbus temperature sensor was chosen to provide a reliable connection to obtain temperature readings every several seconds. All communications on devices were performed via direct wiring or via Ethernet. Standard MRAA/UPM libraries were maintained throughout the process without any modifications.

The gateway acts as a web server, storing data as well as making calls to the temperature sensor to keep the data fresh. The Java UPM library uses libmodbus to read and send periodic updates from the Comet* temperature sensor to the Tomcat* web server.
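The polling loop described above can be sketched as follows, written in JavaScript for illustration (the real gateway code is Java over a UPM driver and libmodbus). readTemperature is a hypothetical stand-in for the call that reads the Comet T3311 sensor.

```javascript
// Sketch of a gateway polling loop: read the temperature sensor every
// few seconds and cache the latest value for the web server to serve.
// readTemperature is a hypothetical stand-in for the Modbus read call.
function startPolling(readTemperature, intervalMs) {
  const cache = { temperature: null, updatedAt: null };
  const poll = () => {
    cache.temperature = readTemperature();
    cache.updatedAt = Date.now();
  };
  poll(); // prime the cache so the web server never serves empty data
  const timer = setInterval(poll, intervalMs);
  return { cache, stop: () => clearInterval(timer) };
}
```

Caching the reading on the gateway keeps web requests fast and decouples them from the sensor's polling interval.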

Figure 8. Gateway installed as part of demo with temperature sensor.

Transferring Code to the Gateway

Typically, ramping up to a commercial gateway involves revamping code so that it is compatible with whichever services are available on the system. In this case, the coding on the prototype was all performed in Java*, HTML, and JavaScript*, making the transition to a commercial solution relatively simple. The code transition was further simplified by the use of the same MRAA/UPM libraries in both phases of the project.

Mapping Grove Sensors to Industrial Sensors

Using MRAA and UPM libraries can help jumpstart a project. The following steps cover porting the app to the commercial product solution:

  1. Target desired industrial hardware:

    a.    Determine whether the hardware requires additional libraries or application support.
    b.    If needed, integrate libraries and software and create OS layers for software deployment.

  2. Once commercial product hardware is successfully integrated into the prototype solution, remove the code that is no longer needed:

    a.    Utilize existing layers created during the prototype phase to install solution dependencies.
    b.    Make changes as needed for new hardware.

  3. Take new and old layers and build into production runtime.
  4. Complete all installation and testing on production hardware.

Customer Application

The base customer application, shown in Figures 9 through 12, was created to replace the functionality of the Grove LCD, LED, buzzer, and touch sensor that the driver would interact with. In the production solution, the customer application would reside on the mobile device carried by the driver, allowing for easy notification of and response to alerts. The customer application is quite simple in this example but could easily be expanded. It has two status indicators, for temperature and door status. When an alert is raised, the alert indicator becomes active and an acknowledge button appears to clear the alert.
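The mapping from gateway status to the application's indicators and acknowledge button can be sketched as below; the field names are invented for illustration and do not reflect the project's actual API.

```javascript
// Hypothetical mapping from raw gateway status to the customer
// application's two indicators and acknowledge button. Field names
// are invented for illustration.
function interpretStatus(status) {
  return {
    doorIndicator: status.doorOpen ? "OPEN" : "CLOSED",
    temperatureIndicator: status.alarm ? "ALERT" : "OK",
    showAcknowledgeButton: status.alarm, // acknowledge only during an alert
  };
}
```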

Figure 9. Main status screen.
Figure 10. Status showing an alert.
Figure 11. Showing a full alert and acknowledge button active.
Figure 12. Initial setup screen finding IP address of gateway.

Conclusion

This exercise demonstrates use of the Grove IoT Commercial Developer Kit to rapidly develop a prototype. With wide-ranging libraries, the ease of use of the Developer Kit simplifies the development process while also providing high compatibility for commercialization of the product. Scaling up to a commercial gateway was quite easy, as the team was able to directly copy code and have it function immediately.

More Information

IoT Path-To-Product: How to Build a Connected Transportation Solution


This Internet of Things (IoT) path-to-product project is part of a series that portrays how to develop a commercial IoT solution from the initial idea stage, through prototyping and refinement to create a viable product. It uses the Grove* IoT Commercial Developer Kit, with the prototype built on an Intel® Next Unit of Computing (Intel® NUC) Kit DE3815TYKHE small-form-factor PC and Arduino* 101 board.

This document demonstrates how to build a prototype and utilize these same technologies in deploying an Intel® IoT Gateway and industrial sensors. It does not require special equipment or deep expertise, and as such, it is intended to be instructive toward developing IoT projects in general.

Note: Known in the US as “Arduino* 101,” this board is known elsewhere as “Genuino* 101.” It is referred to throughout the rest of this document as the “Arduino 101” board.

Building the Prototype

From this exercise, developers will learn to do the following:

  • Connect to the Intel® NUC Kit DE3815TYKHE.
  • Interface with the I/O and sensor repository for the Intel® NUC using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore IoT and implement innovative projects.
  • Run the Java* code sample in Intel® System Studio IoT Edition, an IDE for creating applications that interact with sensors and actuators, enabling a quick start for developing software for Intel® IoT platforms.

For the story of how this project was developed, see IoT Path-to-Product: The Making of a Connected Transportation Solution.

Visit GitHub for this project's latest code samples and documentation.

What it Does

This project simulates the following parts of a transportation monitoring solution:

  • Door. The door can be closed or opened; when it is opened, the driver is signaled that something might be wrong.
  • Temperature. The temperature inside the truck is monitored. The data is logged, and above a certain threshold, an alarm is raised.
  • Alarm. Under certain conditions, an alarm is raised. The alarm can be canceled by pressing the touch button or when the parameters of the system return to normal.
  • Display. Displays the status of the system, temperature, and door status.

How it Works

This connected-transportation application operates based on the following sensor data:

  • Open/closed status of the truck door
  • Temperature of the truck interior
  • Events: open/close door, change temperature, set temperature threshold, trigger/stop alarm

All data is forwarded to the web interface, which can be used to monitor the status of the truck.

Set up the Intel® NUC Kit DE3815TYKHE


This section gives instructions for installing the Intel® IoT Gateway Software Suite on the Intel NUC.

Note: Due to the limited size of the local storage drive, we recommend against setting a recovery partition; you can return to the factory image by using the USB drive again. Once setup is complete, you can use your gateway remotely from your development machine if you are on the same network as the gateway. If you would like to use the Intel® IoT Gateway Developer Hub instead of the command line, enter the gateway's IP address into your browser and go through the first-time setup.

Note: If you are on an Intel network, you need to set up a proxy server.

  1. Create an account on the Intel® IoT Platform Marketplace if you do not already have one.
  2. Download the Intel® IoT Gateway Software Suite and follow the instructions received by e-mail to download the image file.
  3. Unzip the archive and write the .img file to a 4 GB USB drive:

    On Microsoft Windows*, you can use a tool like Win32 Disk Imager*: https://sourceforge.net/projects/win32diskimager.
    On Linux*, use sudo dd if=GatewayOS.img of=/dev/sdX bs=4M; sync, where sdX is your USB drive.

  4. Unplug the USB drive from your system and plug it into the Intel NUC along with a monitor, keyboard, and power cable.
  5. Turn on the Intel® NUC and enter the BIOS by pressing F2 at boot time.
  6. Boot from the USB drive:

    a. From the Advanced menu, select Boot.
    b. From Boot Configuration, under OS Selection, select Linux.
    c. Under Boot Devices, make sure the USB check box is selected.
    d. Save the changes and reboot.
    e. Press F10 to enter the boot selection menu and select the USB drive.

  7. Log into the system with root:root.
  8. Install Wind River Linux on local storage:
    ~# deploytool d /dev/mmcblk0 lvm 0 resetmedia -F
  9. Use the poweroff command to shut down your gateway, unplug the USB drive, and turn your gateway back on to boot from the local storage device.
  10. Plug in an Ethernet cable and use the ifconfig eth0 command to find the IP address assigned to your gateway (assuming you have a proper network setup).
  11. Use the Intel® IoT Gateway Developer Hub to update the MRAA and UPM repositories to the latest versions from the official repository (https://01.org). You can achieve the same result by entering the following commands:
     ~# smart update
     ~# smart upgrade
     ~# smart install upm
  12. Plug in an Arduino* 101 board and reboot the Intel® NUC. The Firmata* sketch is flashed onto Arduino* 101, and you are now ready to use MRAA and UPM with it.

Set up the Arduino* 101 Board

Setup instructions for the Arduino* 101 board are available at https://www.arduino.cc/en/Guide/Arduino101

Connect other Components

This section covers making the connections from the Intel® NUC to the rest of the hardware components. The bill of materials for the prototype is summarized in Table 1, and the assembly of those components is illustrated in Figure 1.

Table 1. Connected transportation prototype components.

 

Component

Details

Base System

Intel® NUC Kit DE3815TYKHE

 

Arduino* 101 Board

Sensor hub

USB Type A to Type B Cable

For connecting Arduino* 101 board to NUC

Components from Grove* IoT Commercial Developer Kit

Base Shield V2

 

Touch Sensor Module

Alarm mute

Button Module

Door toggle

Temperature Sensor Module

Monitors temperature

Buzzer Module

Alarm

Red LED

Alarm status light

LCD with RGB Backlight Module

Status display

Figure 1. Connected transportation proof of concept prototype.

How to Build the Product

From this exercise, developers will learn how to do the following:

  • Connect to the Dell iSeries Wyse* 3290 IoT Gateway.
  • Interface with the I/O and sensor repository using MRAA and UPM from the Intel® IoT Developer Kit, a complete hardware and software solution to help developers explore IoT and implement innovative projects.
  • Run the code sample in Intel® System Studio IoT Edition, an IDE for creating applications that interact with sensors and actuators, enabling a quick start for developing software for Intel® IoT Platforms.

Visit GitHub for this project's latest code samples and documentation.

What it Does

This project simulates the following parts of a transportation monitoring solution:

  • Door. The door can be closed or opened; when it is opened, the driver is signaled that something might be wrong.
  • Temperature. The temperature inside the truck is monitored. The data is logged, and above a certain threshold, an alarm is raised.
  • Alarm. Under certain conditions, an alarm is raised. The alarm status can be monitored and canceled through the customer application.
  • Display. Displays the status of the truck on the customer application.

How it Works

This transportation application operates based on the following sensor data:

  • Open/closed status of the truck door
  • Temperature of the truck interior
  • Events: open/close door, change temperature, set temperature threshold, trigger/stop alarm

All data is forwarded to the admin application, which can be used to monitor the status of the truck.

Set up the Dell iSeries Wyse* 3290 IoT Gateway

This section gives instructions for installing the Intel® IoT Gateway Software Suite on the Dell Wyse* 3290.

Note: If you are on an Intel network, you need to set up a proxy server.

  1. Create an account on the Intel® IoT Platform Marketplace if you do not already have one.
  2. Order the Intel® IoT Gateway Software Suite, and then follow the instructions you will receive by email to download the image file.
  3. Unzip the archive, and then write the .img file to a 4 GB USB drive:

    •    On Microsoft Windows, you can use a tool like Win32 Disk Imager: https://sourceforge.net/projects/win32diskimager
    •    On Linux, use sudo dd if=GatewayOS.img of=/dev/sdX bs=4M; sync, where sdX is your USB drive.

  4. Unplug the USB drive from your system, and then plug it into the Dell Wyse* 3290 along with a monitor, keyboard, and power cable.
  5. Turn on the Dell Wyse* 3290, and then enter the BIOS by pressing F2 at boot time.
  6. Boot from the USB drive:

    a.    On the Advanced tab, make sure Boot from USB is enabled.
    b.    On the Boot tab, put the USB drive first in the order of the boot devices.
    c.    Save the changes, and then reboot the system.

  7. Log in to the system with root:root.
  8. Install Wind River* Linux on local storage: 
    ~# deploytool -d /dev/mmcblk0 --lvm 0 --reset-media -F
  9. Use the poweroff command to shut down your gateway, unplug the USB drive, and then turn your gateway back on to boot from the local storage device.
  10. Plug in an Ethernet cable, and then use the ifconfig eth0 command to find the IP address assigned to your gateway (assuming you have a proper network setup).
  11. Use the Intel® IoT Gateway Developer Hub to update the MRAA and UPM repositories to the latest versions from the official repository (https://01.org). You can achieve the same result by entering the following commands:
    ~# smart update
    ~# smart upgrade
    ~# smart install upm
  12. Connect the FTDI* UMFT4222EV expansion board through a USB cable.
  13. Connect the Comet* T3311 Temperature sensor to the serial port.
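The flash-and-update portion of the steps above (steps 3 and 11) can be sketched as a short shell session. IMAGE and DEV are placeholders; verify the actual device name with lsblk before running anything, because dd overwrites the target device wholesale.

```shell
# Sketch of the gateway flash-and-update sequence (steps 3 and 11 above).
# IMAGE and DEV are placeholders -- confirm DEV with `lsblk` before writing.
IMAGE=GatewayOS.img
DEV=/dev/sdX

flash_cmd() {
    # The dd invocation from step 3, built as a string for review before running.
    echo "sudo dd if=$IMAGE of=$DEV bs=4M; sync"
}

update_cmds() {
    # Package refresh run on the gateway itself (step 11).
    printf '%s\n' "smart update" "smart upgrade" "smart install upm"
}

flash_cmd
update_cmds
```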

Connect other Components

This section covers making the connections from the Dell Wyse* 3290 to the rest of the hardware components. The bill of materials for the product version of the connected transportation project is summarized in Table 2, and the assembly of those components is shown in Figure 2.

Table 2. Transportation product components.

Component                                  Details

Base System
    Dell iSeries Wyse* 3290 IoT Gateway
    FTDI UMFT4222EV
    USB Type A to Type Micro-B Cable       For connecting the UMFT4222EV board to the gateway

Sensors and other Components
    Comet T3311                            Temperature sensor
    Grove* SPDT Relay (30A)                Fan/light control
    Magnetic Switch                        Door sensor
    10 uF Capacitor (optional)
    5V DC Lightbulb
    5V DC Fan

Figure 2. Assembled connected-transportation product.

 

How to Set up the Program

  1. To begin, clone the Path to Product repository with Git* on your computer as follows:
    $ git clone https://github.com/intel-iot-devkit/path-to-product.git
  2. Alternatively, the source can be downloaded from https://github.com/intel-iot-devkit/path-to-product. Once the .zip file is downloaded, uncompress it, and then use the files in the directory for this example.

Adding the Program to Intel® System Studio IoT Edition

Note: The following screenshots are from the Alarm Clock sample; however, the technique for adding the program is the same, just with different source files and jars.

  1. Open Intel® System Studio IoT Edition. It will start by asking for a workspace directory. Choose one and then click OK.
  2. In Intel® System Studio IoT Edition, select File -> New -> Intel(R) IoT Java Project.
  3. Give the project the name “Transportation Demo” and then click Next.
  4. You now need to connect to your Intel® NUC from your computer to send code to it. Choose a name for the connection and enter the IP address of the Intel® NUC in the "Target Name" field. You can also search for it using the "Search Target" button. Click Finish when you are done.
  5. You have successfully created an empty project. You now need to copy the source files and the config file to the project. Drag all of the files from your git repository's “src” folder into the new project's src folder in Intel® System Studio IoT Edition. Make sure the previously auto-generated main class is overridden.
    The project uses the following external jars: commons-cli-1.3.1.jar, tomcat-embed-core.jar, tomcat-embed-logging-juli.jar. These can be found in the Maven Central Repository. Create a “jars” folder in the project's root directory, and copy all needed jars into this folder. In Intel® System Studio IoT Edition, select all jar files in the “jars” folder, and then right-click -> Build path -> Add to build path.
  6. Now you need to add the UPM jar files relevant to this specific sample. Right-click on the project's root -> Build path -> Configure build path. On the Java Build Path 'Libraries' tab, click “Add external JARs...”.
    For this sample, you will need the following jars:

    •    upm_buzzer.jar
    •    upm_grove.jar
    •    upm_i2clcd.jar
    •    upm_t3311.jar
    •    upm_ttp223.jar
    •    mraa.jar

  7. The jars can be found at the IoT Devkit installation root path\iss-iot-win\devkit-x86\sysroots\i586-poky-linux\usr\lib\java.
  8. Afterwards, copy the www folder to the home directory on the target platform using scp or WinSCP. Create a new Run configuration in Eclipse for the project as a Java Application. Set the Main class as com.intel.pathtoproduct.JavaONEDemoMulti in the Main tab. Then, in the Arguments tab:
  • For the devkit version (with Intel® NUC): -config devkit -webapp <path/to/www/folder> -firmata
  • For the commercial version (with Dell Wyse* 3290): -config commercial -webapp <path/to/www/folder>
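Outside the IDE, the Run configuration above corresponds roughly to the following command line on the target. The main class and the -config/-webapp/-firmata arguments come from the instructions; the classpath layout (a bin directory plus the jars folder) is an assumption for illustration, so adjust it to your build output.

```shell
# Hypothetical launch command mirroring the Eclipse Run configuration.
# MAIN and the flags are from the article; the classpath is illustrative.
MAIN=com.intel.pathtoproduct.JavaONEDemoMulti
WEBAPP=$HOME/www

devkit_args() {
    # Devkit version (Intel NUC)
    echo "-config devkit -webapp $WEBAPP -firmata"
}

commercial_args() {
    # Commercial version (Dell Wyse 3290)
    echo "-config commercial -webapp $WEBAPP"
}

# Print the full command rather than running it, for review:
echo "java -cp 'bin:jars/*' $MAIN $(commercial_args)"
```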

Running without an IDE

Download the repo directly to the target platform and run the start_devkit.sh or start_commercial.sh scripts.

Conclusion

As this how-to document demonstrates, IoT developers can build prototypes at relatively low cost and without specialized skill sets. Using the Grove* IoT Commercial Developer Kit and an Arduino* 101 board, project teams can conduct rapid prototyping to test the viability of IoT concepts as part of the larger path-to-product process.

More Information


Distributed Training of Deep Networks on Amazon Web Services* (AWS)


Download Document

Ravi Panchumarthy (Intel), Thomas “Elvis” Jones (AWS), Andres Rodriguez (Intel), Joseph Spisak (Intel)

Deep neural networks are capable of amazing levels of representation power resulting in state-of-the-art accuracy in areas such as computer vision, speech recognition, natural language processing, and various data analytic domains. Deep networks require large amounts of computation to train, and the time to train is often days or weeks. Intel is optimizing popular frameworks such as Caffe*, TensorFlow*, Theano*, and others to significantly improve performance and reduce the overall time to train on a single node. In addition, Intel is adding or enhancing multinode distributed training capabilities to these frameworks to share the computational requirements across multiple nodes and further reduce time to train. A workload that previously required days can now be trained in a matter of hours. Read more about this.

Amazon Web Services* (AWS) Virtual Private Cloud (VPC) provides a great environment to facilitate multinode distributed deep network training. AWS and Intel partnered to create a simple set of scripts for creating clusters that allows developers to easily deploy and train deep networks, leveraging the scale of AWS. In this article, we provide the steps to set up the AWS CloudFormation* environment to train deep networks using the Caffe network.

AWS CloudFormation Setup

The following steps create a VPC that has an Elastic Compute Cloud (EC2) t2.micro instance as the AWS CloudFormation cluster (cfncluster) controller. The cfncluster controller is then used to create a cluster composed of a master EC2 instance and a number of compute EC2 instances within the VPC.

Steps to deploy the Cloudformation and cfncluster

  1. Use the AWS Management Console to launch the AWS CloudFormation (Figure 1).


    Figure 1. CloudFormation in Amazon Web Services

  2. Click Create Stack.
  3. In the section labeled Choose a template (Figure 2), select Specify an Amazon S3 template URL, and then enter https://s3.amazonaws.com/caffecfncluster/1.0/intelcaffe_cfncluster.template. Click Next.


    Figure 2. Entering the template URL.

  4. Give the Stack a name, such as myFirstStack. Under Select a key pair, find the key pair you just named (follow these instructions if you need to create a key pair). Leave the rest of the Parameters as they are. Click Next.
  5. Enter a Key, for example name, and a Value, such as cfnclustercaffe.
    Note that you can give the key and value any names. The value does not have to match the key pair from the previous step.
  6. Click Next.
  7. Review the stack, check the acknowledgement box, and then click Create. Creating the stacks will take a few minutes. Wait until the status of all three created stacks is CREATE_COMPLETE.
  8. The template used in Step 3 calls two other nested templates, creating a VPC with an EC2 t2.micro instance (Figure 3). Select the stack with the EC2 instance, and then select Resources. Click the Physical ID of the cfnclusterMaster.


    Figure 3. Selecting the Physical ID from the Resources tab.

  9. This will take you to AWS EC2 console (Figure 4). Under Description, note the VPC ID and the Subnet ID as you’ll need them in a later step. Right-click on the instance, select Connect and follow the instructions.


    Figure 4. AWS EC2 console.

  10. Once you ssh into the instance, prepare to modify the cluster’s configuration with the following commands:

    cd .cfncluster
    cp config.edit_this_cfncluster_config config
    vi config

  11. Follow the comments in the config file (opened with the final command in Step 10) to fill in the appropriate information.

    Note that while the master node is not labelled as a compute node, it also acts as a compute node. Therefore, if the total number of nodes to be used in training is 32, then choose a queue_size = 31 compute nodes.

    • Use the VPC ID and Subnet ID obtained in Step 9.
    • The latest custom_ami to use should be ami-77aa6117; this article will be updated when newer AMIs are provided.
  12. Launch a cluster with the command: cfncluster create <vpc_name_chosen_in_config_file>. This will launch more AWS CloudFormation templates. You can see them via the AWS CloudFormation page in the AWS Management Console.
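For reference, the config edited in steps 10 and 11 ends up with entries along the following lines. The section names and placeholder IDs below are illustrative; only key_name, vpc_settings, queue_size, and custom_ami mirror values discussed in the steps above, and you must substitute the VPC ID, subnet ID, and key pair you noted earlier.

```shell
# Emit a minimal cfncluster config sketch. All IDs are placeholders;
# replace them with the VPC ID, subnet ID, and key pair from earlier steps.
write_config() {
    cat <<'EOF'
[cluster mycluster]
key_name = cfnclustercaffe
vpc_settings = myvpc
queue_size = 31
custom_ami = ami-77aa6117

[vpc myvpc]
vpc_id = vpc-xxxxxxxx
master_subnet_id = subnet-xxxxxxxx
EOF
}

write_config
# Then launch the cluster: cfncluster create myvpc
```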

Sample Scripts to Train a Few Popular Networks

After the AWS CloudFormation setup is complete, if you configured the size of the cluster to be N, there will be N+1 instances created (1 master node and N compute nodes). Note that the master node is also treated as a compute node. The created cluster has a drive shared among all N+1 instances. The instances contain intelcaffe, Intel® Math Kernel Library (Intel® MKL), and sample scripts to train CIFAR-10 and GoogLeNet.

To start training a sample network, log in to the master node and configure the scripts provided: CIFAR-10 (~/scripts/aws_ic_mn_run_cifar.sh) and GoogLeNet (~/scripts/aws_ic_mn_run_googlenet.sh). Both scripts contain the following variables, which must be edited before running.

# Set stackname_tag to the VPC name prefixed with cfncluster-.
# For example: cfncluster-myvpc-name. The VPC name is the same as the value for vpc_settings.
stackname_tag=cfncluster-
num_instances=
aws_region=us-west-2

There are a few other configurable variables for further customization in both ~/scripts/aws_ic_mn_run_cifar.sh and ~/scripts/aws_ic_mn_run_googlenet.sh.
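A quick sed pass can fill these variables in non-interactively. The substitutions below are a sketch with placeholder values (stack name cfncluster-myvpc, 32 instances) and assume the assignments appear one per line, as in the excerpt above.

```shell
# Fill in the three required variables in a copy of a training script.
# The example values are placeholders; pass the script path as $1.
set_vars() {
    sed -e 's/^\([[:space:]]*stackname_tag=\).*/\1cfncluster-myvpc/' \
        -e 's/^\([[:space:]]*num_instances=\).*/\132/' \
        -e 's/^\([[:space:]]*aws_region=\).*/\1us-west-2/' "$1"
}

# Demo on a stand-in copy of the variable block:
printf 'stackname_tag=cfncluster-\nnum_instances=\naws_region=us-west-2\n' > vars.demo
set_vars vars.demo
```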

To run CIFAR-10 training, after editing the above mentioned variables in the script, run:

cd ~/scripts/
./aws_ic_mn_run_cifar.sh

To run GoogLeNet training, after editing the above mentioned variables in the script, run:

cd ~/scripts/
./aws_ic_mn_run_googlenet.sh

The script aws_ic_mn_run_cifar.sh creates a hosts file (~/hosts.aws) by querying and retrieving the instance information based on the stackname_tag variable. It then updates the solver and train_val prototxt files. The script starts the data server, which provides data to the compute nodes; running the data server alongside the compute work adds a little overhead on the master. After the data server is launched, the distributed training is launched using the mpirun command.

The script aws_ic_mn_run_googlenet.sh creates a hosts file (~/hosts.aws) by querying and retrieving the instance information based on the stackname_tag variable. Unlike the CIFAR-10 example, where the data server provides the data, in GoogLeNet training each worker reads its own data. The script creates separate solver, train_val prototxt, and train.txt files for each worker, and then launches the job using the mpirun command.
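The hosts-file generation can be approximated with the AWS CLI. The describe-instances tag filter below is an assumption about how the scripts identify cluster members (instances created by the named CloudFormation stack); format_hosts simply turns the CLI's tab-separated text output into one address per line, suitable for mpirun.

```shell
# Sketch: build a hosts file from instances belonging to the stack.
# The tag filter and query are illustrative, not the scripts' exact logic.
stackname_tag=cfncluster-myvpc
aws_region=us-west-2

list_ips() {
    aws ec2 describe-instances --region "$aws_region" \
        --filters "Name=tag:aws:cloudformation:stack-name,Values=$stackname_tag" \
        --query 'Reservations[].Instances[].PrivateIpAddress' --output text
}

format_hosts() {
    # `aws --output text` separates values with tabs; emit one per line.
    tr '\t' '\n'
}

# Uncomment on a node with AWS credentials:
# list_ips | format_hosts > ~/hosts.aws
```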

Notices

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

For more information go to http://www.intel.com/performance.

OpenGL* Performance Tips: Avoid OpenGL Calls that Synchronize CPU and GPU


Download PDF

Download Code Sample

Introduction

To get the highest level of performance from OpenGL* you want to avoid calls that force synchronization between the CPU and the GPU. This article covers several of those calls and describes ways to avoid using them. It is accompanied by a C++ example application that shows the effect of some of these calls on rendering performance. While this article refers to graphical game development, the concepts apply to all applications that use OpenGL 4.3 and higher. The sample code is written in C++ and is designed for Windows* 8.1 and Windows® 10 devices.

Requirements

The following are required to build and run the example application:

  • A computer with a 6th generation Intel® Core™ processor (code-named Skylake)
  • OpenGL 4.3 or higher
  • Microsoft Visual Studio* 2013 or newer

Avoid OpenGL Calls that Synchronize CPU and GPU

OpenGL contains a variety of calls that force synchronization between the CPU and the GPU. These are called Sync Objects and are designed to synchronize the activity between the GPU and the application. Unfortunately this hurts overall performance because the CPU stalls until the GPU has completed its action. If possible, avoid these calls.

The OpenGL wiki describes Sync Objects at https://www.opengl.org/wiki/Sync_Object, but here is a summary of ways to avoid this issue:

  • Avoid glReadPixels() or glFinish(), which force synchronization between the CPU and GPU. If you need to use glReadPixels() do so in conjunction with Pixel Buffer Objects.
  • Use glFlush() with caution; if you must synchronize between contexts use Sync Objects instead.
  • Prefer static resources. It is better to create static resources when the application starts and not modify them later. Whenever possible, create vertex buffer objects as static (GL_STATIC_DRAW).
  • Avoid updating resources that are in use by the GPU. For example, do not call glBufferSubData/glTexImage if there are queued commands that access a given VBO/texture. Limit the chances of simultaneous read/write access to resources.
  • Use immutable versions of API calls to create buffers and textures. For example, use API calls like glBufferStorage() and glTexStorage*().
  • Update buffers and avoid GPU/CPU synchronization issues by creating a pool of bigger buffers with glBufferStorage() and persistently mapping them with glMapBufferRange() (using GL_MAP_PERSISTENT_BIT). The application can then iterate over individual buffers with increasing offsets, providing new chunks of data.
  • Use glBindBufferRange() for uniform buffer objects to bind new chunks of data at the current offset. For vertex buffer objects access newly copied chunks of data with firstIndex (for glDrawArrays) or indices/baseVertex parameters (for glDrawElements/BaseVertex). Increase the initial number of pools if the oldest buffer submitted for GPU consumption is still in use. Monitor the progress of the GPU by accessing the data from the buffers with Sync Objects.

The example application demonstrates the effects of three different OpenGL calls that cause the CPU and GPU to synchronize. The calls are glReadPixels, glFlush, and glFinish. These calls are compared to a non-synchronized performance. The current performance for each approach is displayed in a console window in milliseconds-per-frame and number of frames-per-second. Pressing the spacebar cycles between the methods so you can compare the effects. When switching, the application animates the image as a visual indicator of the change.

Intel Skylake Processor Graphics

6th generation Intel Core processors provide superior two- and three-dimensional graphics performance, reaching up to 1152 GFLOPS. Its multicore architecture improves performance and increases the number of instructions per clock cycle.

The 6th generation Intel Core processors offer a number of all-new benefits over previous generations and provide significant boosts to overall computing horsepower and visual performance. Sample enhancements include a GPU that, coupled with the CPU's added computing muscle, provides up to 40 percent better graphics performance over prior Intel® Processor Graphics. 6th generation Intel Core processors have been redesigned to offer higher-fidelity visual output, higher-resolution video playback, and more seamless responsiveness for systems with lower power usage. With support for 4K video playback and extended overclocking, it is ideal for game developers.

GPU memory access includes atomic min, max, and compare-and-exchange for 32-bit floating-point values in either shared local memory or global memory. The new architecture also offers a performance improvement for back-to-back atomics to the same address. Tiled resources include support for large, partially resident (sparse) textures and buffers. Reading unmapped tiles returns zero, and writes to them are discarded. There are also new shader instructions for clamping LOD and obtaining operation status. There is now support for larger texture and buffer sizes. For example, you can use up to 128k × 128k × 8B mipmapped 2D textures.

Bindless resources increase the number of dynamic resources a shader may use, from about 256 to 2,000,000 when supported by the graphics API. This change reduces the overhead associated with updating binding tables and provides more flexibility to programmers.

Execution units (EUs) have improved native 16-bit floating-point support as well. This enhanced floating-point support leads to both power and performance benefits when using half precision.

Display features further offer multiplane overlay options with hardware support to scale, convert, color correct, and composite multiple surfaces at display time. Surfaces can additionally come from separate swap chains using different update frequencies and resolutions (for example, full-resolution GUI elements composited on top of up-scaled, lower-resolution frame renders) to provide significant enhancements.

Its architecture supports GPUs with up to three slices (providing 72 EUs). This architecture also offers increased power gating and clock domain flexibility, creating a powerful game delivery system.

Building and Running the Application

Follow these steps to compile and run the example application.

  1. Download the ZIP file containing the source code for the example application, and then unpack it into a working directory.
  2. Open the lesson6_gpuCpuSynchronization/lesson6.sln file by double-clicking it to start Microsoft Visual Studio 2013.
  3. Select <Build>/<Build Solution> to build the application.
  4. Upon successful build you can run the example from within Visual Studio.

Once the application is running, a main window opens and displays an image. The console window shows which method was used to render it, along with the current milliseconds-per-frame and frames-per-second. Pressing the spacebar cycles between the methods so you can compare performance. Pressing ESC exits the application.

Code Highlights

The application uses three calls to force synchronization, as well as the unsynchronized approach. The various combinations are stored in an array that is created during the initialization phase.

 // Array of structures, one item for each option we're testing
#define I(x) { options::x, #x }
struct options {
    enum  { NONE, READPIXELS, FLUSH, FINISH, nOPTS } option;
    const char* optionStr;
} options[]
{
    I(NONE),
        I(READPIXELS),
        I(FLUSH),
        I(FINISH),
};

To test this, the application creates a vertex and fragment shader, plus loads textures into VRAM.

// compile and link the shaders into a program, make it active
    vShader = compileShader(vertexShader, GL_VERTEX_SHADER);
    fShader = compileShader(fragmentShader, GL_FRAGMENT_SHADER);
    program = createProgram({ vShader, fShader });
    offset = glGetUniformLocation(program, "offset");                            GLCHK;
    texUnit = glGetUniformLocation(program, "texUnit");                          GLCHK;
    glUseProgram(program);                                                       GLCHK;

    // configure texture unit
    glActiveTexture(GL_TEXTURE0);                                                GLCHK;
    glUniform1i(texUnit, 0);                                                     GLCHK;

    // create and configure the textures
    glGenTextures(1, &texture);                                                  GLCHK;
    glBindTexture(GL_TEXTURE_2D, texture);                                       GLCHK;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);                GLCHK;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);                GLCHK;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);           GLCHK;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);           GLCHK;

    // load texture image
    GLuint w, h;  std::vector<GLubyte> img; if (lodepng::decode(img, w, h, "sample.png"))
               __debugbreak();

    // upload the image to vram
    glBindTexture(GL_TEXTURE_2D, texture);                                       GLCHK;
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, &img[0]);                                     GLCHK;

Called once for each screen refresh, the display() method checks first whether we are switching between the methods (that is, animating). If not switching, it then uses the option pointed to in the array of options. Switching from method to method walks through this array.

void display()
{
    // attributeless rendering
    glClear(GL_COLOR_BUFFER_BIT);                                               GLCHK;
    glBindTexture(GL_TEXTURE_2D, texture);                                      GLCHK;
    if (animating) {
        glUniform1f(offset, animation);                                         GLCHK;
    } else {
        glUniform1f(offset, 0.f);                                               GLCHK;
    }
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                                      GLCHK;
    if (!animating)
    switch (options[selector].option) {
    case options::NONE:       break;
    case options::READPIXELS: glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE,&buffer[0]);  GLCHK;  break;
    case options::FLUSH:      glFlush();                                                        GLCHK;  break;
    case options::FINISH:     glFinish();                                                       GLCHK;  break;
    }
    glutSwapBuffers();
}

Each time a video frame is drawn, the performance output is updated in the console and the application checks whether the spacebar or ESC has been pressed. Pressing the spacebar causes the application to move through the non-synchronizing and synchronizing calls; pressing ESC exits the application. When switching, the performance measurements are reset and the image animates as a visual indicator that something changed. If no key was pressed, the next frame is rendered.

// GLUT idle function.  Called once per video frame.  Calculate and print timing
// reports and handle console input.
void idle()
{
    // Calculate performance
    static unsigned __int64 skip;  if (++skip < 512) return;
    static unsigned __int64 start; if (!start && !QueryPerformanceCounter((PLARGE_INTEGER)&start))                      __debugbreak();
    unsigned __int64 now;  if (!QueryPerformanceCounter((PLARGE_INTEGER)&now))
                                                                       __debugbreak();
    unsigned __int64 us = elapsedUS(now, start), sec = us / 1000000;
    static unsigned __int64 animationStart;
    static unsigned __int64 cnt; ++cnt;

    // We're either animating
    if (animating)
    {
        float sec = elapsedUS(now, animationStart) / 1000000.f; if (sec < 1.f) {
            animation = (sec < 0.5f ? sec : 1.f - sec) / 0.5f;
        }
        else {
            animating = false;
            selector = (selector + 1) % options::nOPTS; skip = 0;
            cnt = start = 0;
            print();
        }
    }

    // Or measuring
    else if (sec >= 2)
    {
        printf("frames rendered = %I64u, uS = %I64u, fps = %f, "
               "milliseconds-per-frame = %f\n", cnt, us, cnt * 1000000. / us,
               us / (cnt * 1000.));
        if (swap) {
            animating = true; animationStart = now; swap = false;
        } else {
            cnt = start = 0;
        }
    }

    // Get input from the console too.
    HANDLE h = GetStdHandle(STD_INPUT_HANDLE); INPUT_RECORD r[128]; DWORD n;
    if (PeekConsoleInput(h, r, 128, &n) && n)
        if (ReadConsoleInput(h, r, n, &n))
            for (DWORD i = 0; i < n; ++i)
                if (r[i].EventType == KEY_EVENT && r[i].Event.KeyEvent.bKeyDown)
                    keyboard(r[i].Event.KeyEvent.uChar.AsciiChar, 0, 0);

    // Ask for another frame
    glutPostRedisplay();
}

Closing

Depending upon the game you are developing, it may not be possible to avoid calls that cause synchronization between the CPU and the GPU, especially if your application needs to interact with the pixels on the screen in some fashion or synchronize between different contexts. In general, it is best to avoid synchronization to get the most performance out of your system. This article has covered some of the calls that cause synchronization and suggested alternative approaches.

By combining this technique with the advantages of the 6th generation Intel Core processors, graphic game developers can ensure their games perform the way they were designed.

References

An Overview of the 6th generation Intel® Core™ processor (code-named Skylake)

Graphics API Developer’s Guide for 6th Generation Intel® Core™ Processors

About the Author

Praveen Kundurthy works in the Intel® Software and Services Group. He has a master’s degree in Computer Engineering. His main interests are mobile technologies, Microsoft Windows, and game development.

Code Sample: Access Control in JavaScript* for Intel® Joule™ development board


This code sample illustrates creating a simple alarm system using the development platform and an assortment of extensible sensors.

Once completed, the system will display alarm status on the connected display. Users will also interact with the alarm via a web interface that will allow them to enable or disable the alarm, as well as examine stored alarm data.

Source files and documentation are located on GitHub:  https://github.com/intel-iot-devkit/joule-code-samples/tree/master/access-control-js

Code Sample: Doorbell in JavaScript* for Intel® Joule™ development platform

Code Sample: Exploring C++ on the Intel® Joule™ development platform
