

Intel® XDK FAQs - IoT


Where can I download the Intel XDK?

The Intel XDK main page includes download links for the Linux, Windows, and OS X operating systems.

How do I update the MRAA library on my Intel IoT platforms?

The simplest way to update the mraa library on an Edison or Galileo platform is to use the built-in "Update libraries on board" option, which can be found inside the IoT settings panel on the Develop tab. See the screenshot below:

Alternatively, on a Yocto Linux image, you can update the current version of mraa by running the following commands from the Yocto Linux root command-line:

# opkg update
# opkg upgrade

If your IoT board is using some other Linux distribution (e.g. a Joule platform), you can manually update the version of mraa on the board using the standard npm install command:

# npm install -g mraa

...or:

$ sudo npm install -g mraa

...for a Linux distribution that does not include a root user (such as Ubuntu).

All command-line upgrade options assume the IoT board has a working Internet connection and that you are logged into the board over either an ssh connection or a serial connection that provides access to a Linux prompt on your IoT board.

Can the xdk-daemon run on other Linux distributions besides Yocto?

The Intel XDK xdk-daemon is currently (November, 2016) only supported on the Yocto and Ostro Linux distributions. Work is ongoing to provide a version of the xdk-daemon that will run on a wider range of IoT Linux platforms.

How do I connect the Intel XDK to my board without an active Internet connection?

The Intel Edison Board for Arduino supports the use of an RNDIS connection over a direct USB connection, which provides a dedicated network connection and IP address. Other boards can connect to a local network using either a wireless or wired LAN connection. The wired LAN connection may require attaching a USB Ethernet adaptor to the IoT board, in order to provide the necessary physical wired Ethernet connection point. Access to your local network is all that is required to use an IoT device with the Intel XDK; access to the Internet (by the IoT board) is not a hard requirement, although it can be useful for some tasks.

Most Intel IoT platforms that are running Linux (and Node.js) can be "logged into" using a USB serial connection. Generally, a root Linux prompt is available via that USB serial connection. This serial Linux prompt can be used to configure your board to connect to a local network (for example, to configure the board's Wi-Fi connection) using Linux command-line tools. The specific details required to configure the board's network interface, using the board's Linux command-line tools, are a function of the board and the specific version of Linux running on it. Please see the IoT board's installation and configuration documentation for help with that level of setup.
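For example, on an Intel Edison board running the stock Yocto image, the interactive configure_edison utility (if your image includes it) can walk you through joining a Wi-Fi network from the serial prompt:

# configure_edison --wifi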

How do I use a web service API in my IoT project from my main.js?

Your application's main.js file runs on a standard Node.js runtime engine. Just as in a server-based Node.js environment, you can create a simple HTTP server as part of your IoT Node.js app that serves up an index.html to any client that connects. The index.html file should reference the JavaScript files that update the HTML DOM elements with the relevant web-services data. The index.html (HTML5 application) is served by the HTTP server function in main.js, and the resulting web-services-enabled app is then accessed through a browser, via the IoT device's IP address.
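As a rough sketch of that pattern (the port number and file layout here are placeholders rather than part of any particular sample), main.js could serve index.html like this:

var http = require('http');
var fs = require('fs');

// Serve index.html (the HTML5 client page) to any browser that connects.
http.createServer(function (req, res) {
    fs.readFile(__dirname + '/index.html', function (err, html) {
        if (err) {
            res.writeHead(500);
            res.end('Error loading index.html');
            return;
        }
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(html);
    });
}).listen(8080); // placeholder port

A browser on the same local network could then load the page using the board's IP address and the chosen port.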

See this blog, titled Making a Simple HTTP Server with Node.js – Part III, for more help.

Error: "Cannot find module '/opt/xdk-daemon/current/node-inspector-server/node_modules/.../debug.node"

In some IoT Linux images the xdk-daemon was not compiled correctly, resulting in this error message appearing when a debug session is started. You can work around this issue on an Edison or Galileo platform by using the "Upgrade Intel xdk-daemon on IoT device" option, which can be found in the IoT settings panel on the Develop tab. See the screenshot below:

Error: "Cannot find module 'mime-types' at Function.Module ..."

This error usually indicates that an npm install may not have completed correctly. This can result in a missing dependency at runtime for your IoT Node.js app. The best way to deal with this is:

  1. Remove the node_modules directory in the project folder on your development system.

  2. Switch to another Intel XDK project (if you don't have another project, create a blank project).

  3. Switch back to the problem project.

  4. Click the "Upload" icon on the Develop tab and you should be prompted by a dialog asking if you want to build.

  5. Click the build button presented by the dialog prompt in the previous step.

  6. Wait for a completion of the build, indicated by this message in the console:
    NPM REBUILD COMPLETE![ 0 ] [ 0 ]

Now you should be able to safely run the project without errors.

Error: "Write Failed" messages in console log, esp. on Edison boards.

This can be caused by your Edison device running out of disk space on the '/' partition. You can check this condition by logging into the Edison console and running the df command, which should give output similar to this:

# df -h /
Filesystem  Size    Used    Available  Use%  Mounted on
/dev/root   463.9M  453.6M  0          100%  / 

A value of "100%" under the "Use%" column means the partition is full. This can happen due to a large number of logs under the /var/log/journal folder. You can check the size of those logs using the du command:

# cd /var/log/journal
# du -sh *
 11.6M 0ee60c06f3234299b68e994ac392e8ca
 46.4M 167518a920274dfa826af62a7465a014
  5.8M 39b419bfd0fd424c880679810b4eeca2
 46.4M a40519fe5ab148f58701fb9e298920da
  5.8M db87dcad1f624373ba6743e942ebc52e
 34.8M e2bf0a84fab1454b8cdc89d73a5c5a6b 

Removing some or all of the log files should free up the necessary disk space.

Be sure not to delete the /var/log/journal directory itself!
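For example, to remove one of the larger log directories listed in the du output above (the directory names on your board will differ), you could run:

# rm -rf /var/log/journal/167518a920274dfa826af62a7465a014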

Back to FAQs Main 

Learning About Microsoft Azure* IoT Gateway SDK Modules


Introduction

The Microsoft Azure* IoT Gateway SDK provides a set of tools for enabling gateway devices in multiple ways; it can turn almost any device with an Internet connection into a gateway. It works in both Windows* and Linux* development environments that can compile C and C++ code. It is also extremely modular, with a lightweight core that drives the modules you include within it. The core's message broker handles how the modules communicate with each other, regardless of the environment it runs in. This broker is the key to effective modules, and to an effective gateway overall. It also simplifies communicating with the Microsoft Azure cloud service.

Modules

The SDK includes several modules by default, which can help you understand their structure, application, and communication. They can also provide a good testbed for a gateway, or for your custom modules. To begin, we will take a look at some of the modules created by Microsoft and how they play into the SDK as a whole. Each module is "wrapped" differently depending on whether it's being used in a Windows or Linux development environment, but is otherwise the same.

hello_world

The hello_world module is designed to output a simple "hello world" message to the system log. The majority of the code is error checking, to ensure the environment is properly set up by the core features of the SDK before executing. This module is a publisher: its goal is to publish messages to the broker, which can then forward them to whichever modules need them. In this case it simply sends a message containing "helloWorld" and "from Azure IoT Gateway SDK simple sample!" to the broker, but other modules could send sensor readings or instructions.

Logger

The Logger module is another simple module, which listens to the broker for any messages before saving them to a JSON file. The logger does not output any data to the broker, but instead retains all data that has passed through it. It adds several pieces of metadata, such as the time of the event, the content, and the origin of the message, before appending each entry to the specified JSON log file. This simple module is good for error checking, as it can show you, in real time, the series of events that led to an unexpected output.

Building Your “Gateway”

The SDK has a built-in function to build your gateway. Of course, it doesn't assemble the hardware through magic blue smoke. Instead, it builds the object that acts as a configuration for the SDK to properly interact with modules and hardware. This function takes input in the form of a JSON file, which instructs the program which modules to include and how the broker should treat them.

Modules

The module portion of the JSON file must include three things. First, the module's name, which must be unique within your gateway configuration. The next field that must be filled out is the path of the module. This path must end in .so or .dll, depending on the operating system you're using for your gateway. Finally, you must enter the arguments your module expects. Modules like the logger expect an argument naming the file they should write to. The hello_world module expects no arguments, so you pass it "null".

Links

This section of the JSON is used to instruct the broker on how to handle messages that are sent to it. Source modules are designated to send messages to the broker. Modules designated as sinks accept messages from the broker. In the case of the "Hello World" sample, the hello_world module is designated a source, and the logger module a sink. Below is an example of the JSON used to build the gateway environment on a Linux host machine, modeled on the sample in the Azure IoT Gateway SDK GitHub repository.
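(This is a sketch only; the module paths are illustrative, and field names may differ slightly between SDK versions.)

{
  "modules": [
    {
      "module name": "hello_world",
      "module path": "./modules/hello_world/libhello_world.so",
      "args": null
    },
    {
      "module name": "logger",
      "module path": "./modules/logger/liblogger.so",
      "args": { "filename": "log.txt" }
    }
  ],
  "links": [
    { "source": "hello_world", "sink": "logger" }
  ]
}

The links entry tells the broker to deliver every message published by hello_world to the logger module.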

Conclusion

A developer wishing to design their own module should look through the current modules provided by Microsoft. This article should have given you a good idea of the relationship between modules and the core of the SDK. When building your own modules, you need to consider how they will behave and be applied within the SDK and its broker. The true power of a module lies in how it communicates with other modules to do what they need to do. So go out there and design your own modules, test them out, and make your gateways act exactly the way you want them to!

Resources

The primary source for this documentation is the Azure IoT Gateway SDK GitHub page.

Intel® RealSense™ SDK-Based Real-Time Face Tracking and Animation


Download Code Sample [ZIP 12.03 MB]

In some high-quality games, an avatar may have facial expression animation. These animations are usually pre-generated by the game artist and replayed in the game according to the fixed story plot. If players are given the ability to animate the avatar's face based on their own facial motion in real time, it may enable personalized expression interaction and creative game play. Intel® RealSense™ technology is based on a consumer-grade RGB-D camera and provides building blocks, such as face detection and analysis functions, for this new kind of usage. In this article, we introduce a method for an avatar to mimic the user's facial expression with the Intel® RealSense™ SDK, and we also provide sample code for download.

Figure 1: The sample application of Intel® RealSense™ SDK-based face tracking and animation.

System Overview

Our method is based on the idea of the Facial Action Coding System (FACS), which deconstructs facial expressions into specific Action Units (AU). AUs are a contraction or relaxation of one or more muscles. With the weights of the AUs, nearly any anatomically possible facial expression can be synthesized.

Our method also assumes that the user and avatar have compatible expression space so that the AU weights can be shared between them. Table 1 illustrates the AUs defined in the sample code.

Action Unit          Description
MOUTH_OPEN           Open the mouth
MOUTH_SMILE_L        Raise the left corner of mouth
MOUTH_SMILE_R        Raise the right corner of mouth
MOUTH_LEFT           Shift the mouth to the left
MOUTH_RIGHT          Shift the mouth to the right

EYEBROW_UP_L         Raise the left eyebrow
EYEBROW_UP_R         Raise the right eyebrow
EYEBROW_DOWN_L       Lower the left eyebrow
EYEBROW_DOWN_R       Lower the right eyebrow

EYELID_CLOSE_L       Close left eyelid
EYELID_CLOSE_R       Close right eyelid
EYELID_OPEN_L        Raise left eyelid
EYELID_OPEN_R        Raise right eyelid

EYEBALL_TURN_R       Move both eyeballs to the right
EYEBALL_TURN_L       Move both eyeballs to the left
EYEBALL_TURN_U       Move both eyeballs up
EYEBALL_TURN_D       Move both eyeballs down

Table 1: The Action Units defined in the sample code.

The pipeline of our method includes three stages: (1) tracking the user face by the Intel RealSense SDK, (2) using the tracked facial feature data to calculate the AU weights of the user’s facial expression, and (3) synchronizing the avatar facial expression through normalized AU weights and corresponding avatar AU animation assets.

Prepare Animation Assets

To synthesize the facial expression of the avatar, the game artist needs to prepare the animation assets for each AU of the avatar’s face. If the face is animated by a blend-shape rig, the blend-shape model of the avatar should contain the base shape built for a face of neutral expression and the target shapes, respectively, constructed for the face with the maximum pose of the corresponding AU. If a skeleton rig is used for facial animation, the animation sequence must be respectively prepared for every AU. The key frames of the AU animation sequence transform the avatar face from a neutral pose to the maximum pose of the corresponding AU. The duration of the animation doesn’t matter, but we recommend a duration of 1 second (31 frames, from 0 to 30).

The sample application demonstrates the animation assets and expression synthesis method for avatars with skeleton-based facial animation.

In the rest of the article, we discuss the implementation details in the sample code.

Face Tracking

In our method, the user face is tracked by the Intel RealSense SDK. The SDK face-tracking module provides a suite of the following face algorithms:

  • Face detection: Locates a face (or multiple faces) from an image or a video sequence, and returns the face location in a rectangle.
  • Landmark detection: Further identifies the feature points (eyes, mouth, and so on) for a given face rectangle.
  • Pose detection: Estimates the face’s orientation based on where the user's face is looking.

Our method chooses the user face that is closest to the Intel® RealSense™ camera as the source face for expression retargeting and gets this face’s 3D landmarks and orientation in camera space to use in the next stage.

Facial Expression Parameterization

Once we have the landmarks and orientation of the user's face, the facial expression can be parameterized as a vector of AU weights. To obtain the AU weights, which can be used to control an avatar's facial animation, we first measure the AU displacement. The displacement of the k-th AU, D_k, is obtained by the following formula:

D_k = (S_k^c - S_k^n) / N_k

Where S_k^c is the k-th AU state in the current expression, S_k^n is the k-th AU state in a neutral expression, and N_k is the normalization factor for the k-th AU state.

We measure the AU states S_k^c and S_k^n in terms of the distances between the associated 3D landmarks. Using 3D landmarks in camera space instead of 2D landmarks in screen space prevents the measurement from being affected by the distance between the user's face and the Intel RealSense camera.

Different users have different facial geometry and proportions, so normalization is required to ensure that the AU displacements extracted from two users have approximately the same magnitude when both are in the same expression. We calculate N_k in an initial calibration step on the user's neutral expression, using a method similar to the measurement of MPEG-4 FAPUs (Face Animation Parameter Units).

In normalized expression space, we can define the scope for each AU displacement. The AU weights are calculated by the following formula:

w_k = D_k / D_k^max

Where D_k^max is the maximum of the k-th AU displacement.

Because of the limited accuracy of face tracking, the measured AU weights derived from the above formulas may generate an unnatural expression in some special situations. In the sample application, geometric constraints among AUs are used to adjust the measured weights to ensure that a reconstructed expression is plausible, even if not necessarily close to the input geometrically.

For the same reason, the signal of the measured AU weights is noisy, which may cause the reconstructed expression animation to stutter in some situations. So smoothing the AU weights is necessary. However, smoothing may introduce latency, which impacts the agility of expression changes.

We smooth the AU weights by interpolating between the weight measured in the current frame and the smoothed weight of the previous frame, as follows:

w_{i,k} = α_i · ŵ_{i,k} + (1 - α_i) · w_{i-1,k}

Where w_{i,k} is the smoothed weight of the k-th AU in the i-th frame, ŵ_{i,k} is the weight measured in the i-th frame, and α_i is the smoothing factor described below.

To balance the requirements of both smoothing and agility, the smoothing factor of the i-th frame for AU weights, α_i, is set to the face-tracking confidence of that frame. The face-tracking confidence is evaluated according to the lost-tracking rate and the angle by which the face deviates from a neutral pose. The higher the lost-tracking rate and the bigger the deviation angle, the lower the confidence in the tracking data.

Similarly, the face angle is smoothed by interpolating between the angle measured in the current frame and the smoothed angle of the previous frame, as follows:

θ_i = β_i · θ̂_i + (1 - β_i) · θ_{i-1}

Where θ_i is the smoothed face angle in the i-th frame and θ̂_i is the angle measured in that frame.

To balance the requirements of both smoothing and agility, the smoothing factor of the i-th frame for the face angle, β_i, is adaptive to the frame-to-frame variation in face angle and is calculated by

β_i = min(|θ̂_i - θ_{i-1}| / T, 1)

Where T is the noise threshold: a variation between face angles smaller than T is treated mostly as noise to smooth out, while a bigger variation is treated as actual head rotation to respond to.

Expression Animation Synthesis

This stage synthesizes the complete avatar expression in terms of multiple AU weights and their corresponding AU animation assets. If the avatar facial animation is based on a blend-shape rig, the mesh of the final facial expression, B_final, is generated by the conventional blend-shape formula as follows:

B_final = B_0 + Σ_i w_i · (B_i - B_0)

Where B_0 is the face mesh of the neutral expression and B_i is the face mesh with the maximum pose of the i-th AU.

If the avatar facial animation is based on a skeleton rig, the bone matrices of the final facial expression, S_final, are obtained analogously:

S_final = S_0 + Σ_i (A_i(w_i) - S_0)

Where S_0 is the set of bone matrices of the neutral expression, and A_i(w_i) is the set of bone matrices of the i-th AU, sampled from that AU's key-frame animation sequence A_i according to the AU's weight w_i.

The sample application demonstrates the implementation of facial expression synthesis for a skeleton-rigged avatar.

Performance and Multithreading

Real-time facial tracking and animation is a CPU-intensive function. Integrating the function into the main loop of the application may significantly degrade application performance. To solve the issue, we wrap the function in a dedicated work thread. The main thread retrieves new data from the work thread only when the data have been updated; otherwise, the main thread uses the old data to animate and render the avatar. This asynchronous integration mode minimizes the performance impact of the function on the primary tasks of the application.

Running the Sample

When the sample application launches (Figure 1), by default it first calibrates the user's neutral expression and then maps the user's performed expressions to the avatar face in real time. Pressing the "R" key resets the system, either when the current user wants to recalibrate or when a new user takes over control of the avatar expression; this activates a new session, including calibration and retargeting.

During the calibration phase—in the first few seconds after the application launches or is reset—the user is advised to hold his or her face in a neutral expression and position his or her head so that it faces the Intel RealSense camera in the frontal-parallel view. The calibration completes when the status bar of face-tracking confidence (in the lower-left corner of the Application window) becomes active.

After calibration, the user is free to move his or her head and perform any expression to animate the avatar face. During this phase, it’s best for the user to keep an eye on the detected Intel RealSense camera landmarks, and make sure they are green and appear in the video overlay.

Summary

Face tracking is an interesting function supported by Intel® RealSense™ technology. In this article, we introduce a reference implementation of user-controlled avatar facial animation based on the Intel® RealSense™ SDK, along with a sample written in C++ that uses DirectX*. The reference implementation covers how to prepare animation assets, how to parameterize the user's facial expression, and how to synthesize the avatar's expression animation. Our experience shows that the algorithms of the reference implementation are essential for reproducing plausible facial animation, and that high-quality facial animation assets and appropriate user guidance are just as important for a good user experience in a real application environment.

Reference

1. https://en.wikipedia.org/wiki/Facial_Action_Coding_System

2. https://www.visagetechnologies.com/uploads/2012/08/MPEG-4FBAOverview.pdf

3. https://software.intel.com/en-us/intel-realsense-sdk/download

About the Author

Sheng Guo is a senior application engineer in the Intel Developer Relations Division. He has been working with top gaming ISVs on Intel client platform technologies and performance/power optimization. He has 10 years of expertise in 3D graphics rendering, game engines, and computer vision, and has published several papers at academic conferences as well as technical articles and samples on industry websites. He holds a bachelor's degree in computer software from Nanjing University of Science and Technology and a master's degree in computer science from Nanjing University.

Wang Kai is a senior application engineer in the Intel Developer Relations Division. He has been in the game industry for many years and has professional expertise in graphics, game engines, and tools development. He holds a bachelor's degree from Dalian University of Technology.

SMB Platform Upgrade Instructions 10.8.0


Background

Release Notes

10.8.0

Download link

Download the following files from registrationcenter.intel.com. Use the serial number that is sent to you via email to begin the download process.

saffron-install_10_8_0.bin     
rhel6-setup-lite.tgz                

Upgrade Specifics

  • Step 1: Uninstall your current version of SMB.
  • Step 2: Update the smbtool utility in the PostOS setup script (rhsetup). IMPORTANT: This is a centralized-logging-specific upgrade only. Follow the instructions below.
  • Step 3: Install SMB 10.8.0.

General Upgrade Notes:

  • For site installations with both SMB and Streamline, ensure that the installation meets inter-product dependencies as defined by the Saffron Product Support team. 
  • The SMB installer maintains site-specific customizations in the ~saffron/smb/conf and ~saffron/smb/catalina/conf directories.
  • In general, only saffron user permissions (not root) are required to run the SMB installer.
  • DO NOT RUN THE RHSETUP SCRIPT. See the instructions below for extracting rhsetup and then installing the updated smbtool utility. The root user is required to perform this upgrade.
  • Saffron recommends testing this first in your development environment to verify operation with site-specific environment and components.

Upgrade SMB 

Pre-Steps

1.  Review the 10.8.0 Release Notes for additional information that extends the instructions here. If the upgrade spans multiple releases, e.g., from 10.2.0 to 10.4.0, review each intervening set of release notes for specific installation instructions.

2.  Download the SMB installer (saffron-install_10_8_0.bin) using the link provided in your registration email and copy to the head node of the SMB cluster in the Saffron home directory. This can be done in several ways and might be site-specific, but Saffron recommends using as few hops as possible to put the SMB installer in the ~saffron directory. 

3.  Ensure that the provided SMB installer is executable. Log in as the saffron user and enter the following command:

$ chmod u+x saffron-install_10_8_0.bin 
 

Uninstall SMB

4.  Shut down the cluster from the admin node. Log in as the saffron user and enter the following command:

cluster stop

5.  Create a cluster-wide global backup of the Space configurations. Use the SMB archive utility. For example:

archive -g -p archivefilename -d bkup      

This tells the archive utility to make a cluster-wide global backup and place it in the bkup directory.

An archive file called archivefilename-20161017120511 is created.

If you need to restore the global backup in the future, enter the following command:

archive -r archivefilename bkup/archivefilename-20161017120511

For more information, refer to the archive information by entering the following on the command line:

man archive

6. Uninstall the current release. As the saffron user, enter the following command:

uninstall

Answer the prompt verifying the uninstall with yes.
 

Update the smbtool utility

7.  The smbtool utility (for systems tasks) in the PostOS setup (rhsetup) script has been updated to include the new centralized logging feature. Execute the following steps on all nodes in the cluster. Log in as the root user.

a.  Download rhsetup (rhel6-setup-lite.tgz) using the link provided in your registration email.

b.  Copy rhsetup into the /tmp directory of the admin node and each worker node.

cp rhel6-setup-lite.tgz /tmp 

c.  Untar rhsetup. 

tar xzf rhel6-setup-lite.tgz 

d.  Locate rhel6-setup/rpms/smbtool-8.0-8.x86_64.rpm.  

e.  Update smbtool.

rpm -e smbtool 
rpm -ivh smbtool-8.0-8.x86_64.rpm
 

Install SMB

8.  Run the installer. Logged in as the saffron user in the home directory, enter the following command:

./saffron-install_10_8_0.bin

This unpacks the installer, including its embedded RPM for SMB, runs a post-install procedure, and copies the software out to all nodes in the cluster.

9.  Review configuration files in the following directories to see if they have been modified since the last upgrade. Modified files have as-shipped appended to their names. Be sure to diff your files against the changed files from the new installation.

~saffron/smb/conf

~saffron/smb/catalina/conf     

Note: In this release, the following files have been modified from ~saffron/smb/conf:

admin-config.properties 

saffron.xml

advantage-config.properties (for users of SaffronAdvantage)

NOTE: Verify that the validFileRoots property is properly set to your File Source location (from where you ingest data). Failure to do so will cause all affected Spaces to go offline. See the SMB 10.8.0 Release Notes for information on setting this property.

10. Restart the MySQL daemons on all nodes only if your MySQL server-specific configuration has changed; otherwise, this step is not necessary.

As the saffron user, enter the following command:  

creset -r 

11. Restart the SMB cluster. As the saffron user, enter the following command:

cluster start

12. Restart Ganglia only if Ganglia-specific configurations have been changed by the system administrator. For a general SMB update, this is not required.

ganglia restart 

13. Verify that you have version 10.8.0. Enter the following command:

cluster version

14. Verify operation of the new cluster.  Enter the following command:

cluster status

15. Verify in the latest log files in ~/smb/logs that no errors exist.

16. Log in to Saffron Admin and Saffron Advantage websites to verify proper operation. 

17. (Optional) If your site has site-specific jdbc jars (e.g., SQL Server jtds or Teradata drivers) or jars that extend SMB functionality, do the following as the saffron user:

cluster stop
cp -p dir_containing_jars/*.jar ~/smb/lib
rd    (The "rd" command syncs the worker node smb/lib directory with the head node.)  
cluster start

Repeat steps 14, 15, and 16.

Intel® IoT Gateway Developer Hub and Software Suite/Pro Software Suite Release Notes


These are the latest release notes for the Intel® IoT Gateway Developer Hub, Intel® IoT Gateway Software Suite, and Intel® IoT Gateway Pro Software Suite.

Intel® IoT Gateway Developer Hub and Software Suite/Pro Software Suite Release Notes ARCHIVE


Use this ZIP file to access each available version of the release notes for the Intel® IoT Gateway Developer Hub, Intel® IoT Gateway Software Suite, and Intel® IoT Gateway Pro Software Suite, beginning with production version 3.1.0.17 through the currently released version. The release notes include information about the products, new and updated features, compatibility, known issues, and bug fixes.

Saffron Technology™ Cognitive API FAQ

What are synchronous and asynchronous APIs?

Saffron’s Thought Processes (THOPs) are user-defined functions that allow you to tie together various Saffron MemoryBase (SMB) and other capabilities using a scripting language. Thought Processes can help perform a wide variety of actions such as: simplifying the execution of complex sequential queries, calling outside applications to be used in conjunction with Saffron reasoning functions, or converting query results into data suitable for UI widgets.

A key feature of Saffron's thought processes is that they can run synchronously or asynchronously.

By default, Saffron APIs run synchronous thought processes. A synchronous process is typically invoked by the calling client via an HTTP GET call, and the client then waits for the result. Use synchronous THOPs when you use the default (single-threaded) WebService engine, which is known as THOPs 1.0. This process returns results one at a time; thus, it is slower than asynchronous processes. Still, this process is better for developers who need to troubleshoot or debug issues. Typically, synchronous thought processes (THOPs 1.0) are used for the following operations:

  • simple single queries
  • fast operations
  • operations that do not need asynchronization
  • troubleshooting and debugging in a development environment

Saffron APIs can also run asynchronous thought processes. These processes communicate with calling clients through messages in real time and can operate as long-running operations. Asynchronous APIs are only available with the latest Saffron WebService engine, known as THOPs 2.0. This process is much faster than synchronous processes. Typically, asynchronous thought processes (THOPs 2.0) are used for the following operations:

  • complex queries that cannot be expressed in a single query
  • business logic when writing apps based on SUIT and THOPs
  • integrating Saffron APIs and third-party APIs
  • using a stored procedure in a relational database
  • deploying code at runtime

Learn more about thought processes.

What are Batch APIs?

Batch APIs allow you to run the same API (or set of APIs) repeatedly over a large number of items. The Batch API collects a large number of items, such as records, rows, IDs, and attributes. For each item, it calls one of the core APIs (such as similarity, classification, or recommendation) to complete the process.

A key component of batch APIs are the thought processes under which they run. Thought processes (THOPs) are stored procedures that can run synchronously or asynchronously.

By default, Saffron APIs run synchronous thought processes. A synchronous process is typically invoked by the calling client via an HTTP GET call, and the client then waits for the result. Use synchronous THOPs when you use the default (single-threaded) WebService engine, which is known as THOPs 1.0. This process returns results one at a time; thus, it is slower than asynchronous processes. Still, this process is better for developers who need to troubleshoot or debug issues.

Example Synchronous APIs:

Saffron APIs can also run asynchronous thought processes. These processes communicate with calling clients through messages in real time and can operate as long-running operations. Asynchronous APIs are only available with the latest Saffron WebService engine, known as THOPs 2.0. This process is much faster than synchronous processes.

Example Asynchronous APIs:

What is a signature?
A signature is a list of attributes (category:value) that best characterizes a query item. It represents the most informative and relevant attributes for that item. Once the signature is found, it can be used to provide useful and relevant comparison data.
What is the difference between the Classify Item API and the Nearest Neighbor API?

Both APIs can find the classification of a query item. For example, assume that we want to find out the classification (type) of animal:bear. The way to find the answer differs between the two APIs.

The Classify Item API gathers a list of attributes (signature) that best represents the animal:bear. Next, it finds classifications (or types) that are similar to the bear by comparing the attributes of the classifications against the signature of the bear. It then returns the top classification values based on these similar items.

The Nearest Neighbor API also gathers a list of attributes (signature) that best represents the animal:bear. It is different in that it uses the similarity feature to find similar animals (as opposed to finding similar classifications). From the top list of animals that are the most similar to the bear, the API initiates a voting method to return the top classification values.

When should the Classify Item API be used versus the Nearest Neighbor API?

The decision to use the Classify Item API or the Nearest Neighbor API depends on the available ingested data. Datasets that contain a high percentage of one particular classification negatively affect both the algorithm and the resulting probabilities if the Classify Item API is used. Because the data is skewed toward one type, the query item could be incorrectly labeled. In this situation, the Nearest Neighbor API can cut through that excess weight by finding neighbors that are similar to the query item. Even if it finds only one neighbor, that could be enough to get a correct label.

For example, assume that a dataset contains 100 animals. Of these, 60% are classified as invertebrates and 20% are classified as mammals. In spite of the weighted list, we can use the Nearest Neighbor API to find the classification of animal:bear by finding another animal that shares the attribute produces:milk. Since mammals are the only animals that produce milk, we can accurately conclude that the bear is a mammal.

What does "confidence" measure?

Confidence is a measuring tool in the Classification API suite that indicates how confident the algorithm is in a classification decision ("I am 99% confident that the bear can be classified as a mammal"). It is the algorithm's self-assessment (or rating) of a prediction based on the amount of evidence it has. Typically, low confidence indicates a small amount of evidence in the dataset. Examples of evidence might include similarity strength, homogeneity of the neighborhood, information strength, and/or disambiguation level between classes.

The Classification APIs use confidence to:

  • remove low-confidence records, either automatically or through human intervention
  • correct human mistakes
  • detect anomalies
  • better extrapolate overall accuracy from the "truth" set to a "training" set

Note: Do not confuse confidence with real accuracy or with Statistical Confidence.

How do "percent" and "similarity" influence "confidence" when using the Nearest Neighbor API to classify an item?

Confidence is the ultimate metric in that it indicates how confident we are that a query item is properly classified. Percent and similarity are used as evidence to compute confidence. Similarity indicates how similar a query item is to its nearest neighbors and percent shows how many of the neighbors have the same classification (or type). So, in a case where a query item has lots of nearest neighbors and those neighbors are the same type, we can conclude with a high level of confidence that the query item shares the same classification as its nearest neighbors.

Confidence levels decrease as the percent and/or similarity values decrease. A lower percentage indicates that not all of the nearest neighbors share the same classification. A lower similarity score indicates that some of the attributes of the nearest neighbors do not closely match the query item. It also indicates that some of the attributes have low "score" values, which means that they are not as relevant to selecting a classification.

What is the metric score in a signature? Why is it important?

For classification APIs, the metric score measures the relevance of an attribute (in a signature) for predicting the classification of a query item. A higher metric score (for example, 1) means an attribute has higher predictive value for the label of the query item.

For example, assume that we are attempting to classify animal:bear. The classification API returns a list of attributes (signature) that characterizes the bear in hopes that we can find similar attributes that will help us classify it. The attribute behaves:breathes has a lower metric score (.5) because it does not help us narrow down the classification of the bear (mammals, reptiles, amphibians, and other types have the same attribute). The attribute produces:milk has a higher metric score (1) because it provides very useful and accurate information that can help us properly classify the bear. Since our data indicates that all animals with the produces:milk attribute are mammals, we can also label the bear as a mammal.

The higher a metric score is for attributes in a signature, the greater the chances of making an accurate classification. For similarity, a higher score means a better chance of finding similar items.

How can I learn more about APIs?
Refer to our API section of SMB documentation.
How can I learn more about thought processes (THOPs)?

Saffron’s Thought Processes (THOPs) are user-defined functions that allow you to tie together various Saffron MemoryBase (SMB) and other capabilities using a scripting language. Thought Processes can help perform a wide variety of actions such as: simplifying the execution of complex sequential queries, calling outside applications to be used in conjunction with Saffron reasoning functions, or converting query results into data suitable for UI widgets. THOPs are written in one of the supported script languages (in v10 of SMB, only JavaScript is available).

If you are familiar with other database products, think of Thought Processes as stored procedures. THOPs can be created via the REST API or by using the developer tools in Saffron Admin. Once a Thought Process is defined, it becomes callable through the standard REST API.

Learn more about thought processes.


Visualize Data Using Tableau®


Tableau® is a data visualization tool. Saffron Technology™ uses Tableau's web data connector to visualize output from our APIs. To run the APIs from Tableau, follow these instructions.

Run an API from Tableau

Prerequisites: Install the Docker container for the Web SDK.  <link>.

  1. Run the Tableau Desktop (version 10 or later).
  2. Go to Connect > To a Server > Web Data Connector

      ​
  3. In the Web Data Connector window, enter the URL for the web data connector in the following format:

    http://{serverHostname}:{port}/web-sdk/connectors/tableau/html/connector.html

    For example: http://sales01.saffrontech.intel.com/web-sdk/connectors/tableau/html/connector.html

      Enter URL for the Web connector in Tableau
  4. Press ENTER on your keyboard.
  5. Enter information about the Saffron Web Services and the API you want to run, as shown in the following example.

       
    Server URL
    Enter the URL that points to the saffron-ws server in the following format:
    http://{saffronWSHostname}:{port}
    For example: http://sales01.saffrontech.intel.com:8888

    Space
    Enter the space name.
    For example: zoo

    API
    Enter the name of the API you want to use.
    For example: async_batch_classifications

    Params
    Enter the parameters applicable to the API. See the API documentation for more information.

    For example:

    - filterby: "animal": "cat"
    - nresults: 100
    - signaturesize: 25
    - signaturesm: 2
    - similaritysm: 2
  6. Click Submit.
    Tableau connects to the web data connector.

    Tableau when it has connected to the server
      
  7. Click Update Now.
    The data is displayed in a table.

    Data as displayed in table form in Tableau
     
  8. Select Sheet1 and build your custom visualization. 

For more information about creating dashboards, watch the Tableau video on YouTube.

Saffron Technology™ Elements


Saffron Technology™ Elements is a robust developer solution that includes APIs, Widgets, and other items that enable you to take advantage of Saffron's many offerings.

APIs

Saffron Technology APIs enable you to include our offerings in your environment.

View our APIs.

Widgets

Saffron Technology widgets include items such as bar charts, interactive tables, and heat maps that provide visual analytics you can embed in your application.

View our Widgets.

Visual Analytics

Saffron Technology uses the Tableau® web data visualization tool to visualize output from our APIs.

View information about Visual Analytics.

MIT License


Copyright (c) 2016 Intel

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Close Calls, Hard Choices, and a Dog Named Waffles: Developing Overland*


Download [PDF 0.98 MB]

Close calls and hard choices punctuate the gameplay of Overland, a post-apocalyptic, turn-based survival strategy game from Finji. Overland’s interface is surprisingly intuitive, and takes little time to learn. Each random level is a procedurally generated tile populated by strange creatures and an evolving cast of survivors, encountered during a cross-country road-trip that leads straight into trouble. Just how much trouble depends on the choices players make. Some can be deadly. Some help you survive. Some involve a dog. No apocalypse would be complete without a dog named Waffles, right?

In other words, Overland is great fun, and has the awards to prove it, including Best Art Design in the 2016 Intel® Level Up Contest.

The Developers’ Journey

Finji was founded in 2006 by Rebekah and Adam Saltsman—two veteran indie game-developers, and parents to two young boys—who run the thriving game studio out of their Michigan home. The Saltsmans had a lot to say about how they turned a 2D iPad*-based prototype into a 3D cross-platform, touch-enabled PC game for the Intel Level Up Contest. They also shared what it’s like balancing parenthood with game development, the role alt-games storefront itch.io played in helping them test and refine gameplay, and the importance of building a game engine that supports fast, iterative prototyping.


Figure 1: UI elements give players easy-to-understand choices through overlays of icons inspired by US National Park Service signs.

Overland Origins

“The original whiteboard doodle that inadvertently spawned Overland was a mashup of 868-HACK*, by Michael Brough, and XCOM: Enemy Unknown*, by Firaxis Games,” Adam told us. Like many game developers, the Saltsmans are students of gaming. They draw inspiration and take away lessons from every game they’ve ever played.

As freelancers, they have more than a decade of experience applying those lessons to game design, art, and code for other studios. They’ve also released titles of their own, starting with a Flash*-based game called Gravity Hook*, followed by an iOS* game called Wurdle*. Between 2009 and 2013, they created six more iOS titles. The award-winning Hundreds*, a popular puzzle game, relied on the multi-touch interaction capabilities of the iPad.

“When we did Hundreds, there wasn’t much hardware support for multi-touch interaction outside of an iPad,” Adam said. Mobile gaming quickly became a very crowded space to compete in, so Bekah and Adam knew they would need to diversify by creating cross-platform PC games. “We’d spent 18 months collaborating with four other developers to make Hundreds. Financially, it did fine, but we couldn’t port it to PC (in 2013) because it was a multi-touch game.”

If they were going to plunge into the world of PC gaming, they knew they needed more resources. So, the Saltsmans focused on contract work. “We built up a war-chest of money,” Bekah said. “The question was: how far could it get us?”

The Saltsmans knew that what they were about to do was part-and-parcel of being indie developers. They’d seen their friends go through periods of having no health insurance and limited income before finding success. “We had kids and a mortgage. The prospect of investing everything we’d made in a cross-platform title was terrifying,” Bekah said.

The Prototype

Overland started as a 2D iPad game. “We prototyped most of the core gameplay in the current version of Overland in about two weeks, for a few thousand dollars,” Adam explained. Then they sat on it for six months. “We didn’t make any significant investments in it during that time. Instead, we kept refining the rules and adding new features. We wanted to get to a point where we could hand it off to a stranger, or a journalist, and have them get excited about experiencing it, even though it was a hot mess of missing elements, UI … the usual stuff.”


Figure 2: Gameplay takes place on board-game-like tiles where everything is easy to comprehend.

The couple also knew that to succeed as a PC game, the gameplay had to have, as Adam put it, “real strategy legs.” Art direction, sound, and story would be crucial, because many elements typically used in strategy game designs—RPG elements and tech trees, for example—were out of bounds for this project, for a variety of reasons.

“We were founding members of the Austin indie game collective,” Bekah said. “So we would take the Overland prototype—which was this horribly ugly 2D grid—to meetups where game-developers, journalists, and enthusiasts could give us feedback. That was invaluable."

Rules to Design By

“Weird things happen when you reduce a strategy game down to board-game-like spaces,” Adam said. “It ends up having a lot in common with puzzle games. This is actually reinforced by research that uses CAT scans and MRI technology to look at different parts of the brain during action or casual gameplay.”

According to Adam, however, it was one year into development before he realized that Overland’s level generator had a lot in common with a puzzle generator. That discovery led to three core design-principles that drive level creation. “As a post-apocalyptic road-trip game, Overland is a big design space—as soon as you tell someone about it, they have five cool ideas to add to the game. We used three design principles to vet ideas, and decide ­which ones were worth implementing.”

They call the first principle “the Minotaur in a china shop,” after a game in which a Minotaur enters a china shop, knocks something over, and then goes into a rage, destroying everything in the store. In Overland, this idea is used to determine whether a design idea will lead to a place where a sloppy move by a player can start a chain reaction that produces interesting consequences.

“It’s a principle that’s more interesting than a design in which you come to a level with three turns. On the third turn, you die. That would be like a poison-gas level,” Adam explained. “That’s not very Overland-y. Whereas a level in which an enemy chases you, you injure it, and then get put in a position where it’s a poison-gas level, that’s something you’d see in Overland. Because it’s the result of something the player did.”

The other principles go hand-in-hand. Randomness is fine, as long as it’s the player’s fault; the player gets a proportional warning about incoming random events. Each level is created on the fly by the game engine, which randomly combines ingredients to produce a fun and exciting experience for the player based on where they are in the country, and other factors.

“For example, one of the core mechanics of the game is that when you make noise, it attracts more creatures that will try to chase you down,” Adam said. “When that happens, you get a two-turn warning about where a new creature is going to appear. That’s because new creatures can be really bad. We want players to have some windup.”

Another example is that on a windy day, fire will spread, even if there’s nothing flammable for it to spread to, so players get a one-turn warning: this tile is heating up. Such “random” events aren’t random at all. “They are unforeseen, or very hard to foresee, non-random consequences of player decisions. For example, there’s a monster here. It’s too close to kill with the weapon in hand, so I’m going to kill it by setting it on fire. Except now there’s a fire that can spread throughout the tile if weather conditions permit.”

All of this creates a lot of opt-in complexity. “Players get to decide how much trouble they want to participate in,” Adam said. “Our team was too small to build a game with two-layers of difficulty, one easy, the other hard,” Bekah added. "The way people can experience more difficulty in their Overland runs, is by choosing to venture further from the road.”


Figure 3: Opt-in difficulty is based on whether a player chooses to drive into more, or less, danger.

Building complexity into the core gameplay ratcheted up the tension. “I love that a slow-paced game can give people adrenaline jitters,” Bekah said. “Even when a player dies in Overland, they’re laughing about it.”

Team-Based Collaboration, Fast-Paced Iteration

Overland’s art, coding, sound, and gameplay are the collaborative effort of a core team of four—Bekah, Adam, art director Heather Penn, and sound designer Jocelyn Reyes. “I think of our design process as old-school game design,” Adam said. “We all wear multiple hats, no one works in a silo.” It’s an approach that encourages cross-discipline collaboration. For example, Penn’s art influences Reyes’ sound design, and vice versa. Everyone contributes gameplay ideas.

“If someone has an idea, we prototype it to see if it works,” Bekah said. Pitching solutions instead of ideas is encouraged. “We try to craft solutions to nagging issues—for example, a graphics problem that will also solve a gameplay issue.” A value is assigned to how long it might take, and if it’s within reason, it gets developed. “We all contribute to this very iterative, prototype-intensive process,” Adam said.

The Overland team isn’t afraid to spend development cycles making pipeline course corrections. “I’d rather spend a week fixing the system, than two days building a system Band-Aid,” Adam said. “Having a game engine that allows us to quickly prototype in this really cool iterative way, with a team of people, is invaluable to how we’re building Overland.”

Tools

Overland is being built in Unity*, which Adam estimated would save them two years of 3D engineering work. “The tradeoff for using a closed-source tool was worth it.” They’re running Unity under Mac OS* on their development system, a late 2012 iMac* with Intel inside. Unity also gives them easy cross-platform portability.

They use Slack* for team collaboration. Or as Adam put it, "Overland would not exist without Slack, period." They're using SourceTree* and GitHub* with the Git LFS (Large File Storage) extension for audio and graphics source file control, while mainstay art tools such as Adobe* Photoshop* and Autodesk* Maya* are being used to create the assets that Unity's game engine pulls into its procedurally generated levels. Wwise* from Audiokinetic is Overland's interactive sound engine.

Early Access Play Testing

Another crucial element in honing Overland’s gameplay came in the form of itch.io, an alternative games platform that provided Bekah and Adam the ability to dole out limited early-access to builds, and get feedback from users. One of itch.io’s benefits was its automatable command-line utility for uploading code patches. “Itch.io uses code assembled from open-source components like rsync that can generate a patch and upload it for you,” Adam explained. “The whole build script to generate a build for Windows* 64, Windows 32, Linux* Universal, and Mac OS Universal, and then upload it to itch.io, took an hour or two. And half of that time was spent figuring out how to print a picture in ASCII.”


Figure 4: Heather Penn’s award-winning art design drew on a variety of influences, including American artist Edward Hopper. Scenes were crafted to take advantage of shaders that would look great across a variety of systems.

A Level Up

The Saltsmans learned of the Intel Level Up Contest through friends who happened to be former winners. Those friends reported having great experiences with the contest, and working with Intel. As a result, the Saltsmans didn’t hesitate to enter, even though Overland was a work-in-progress that still used a lot of placeholder art. That art was so gorgeous it earned Overland top honors in the Art Design category, in a year that saw more entries than ever before.

The Intel Level Up Contest required entries to be playable on a touch-enabled Razer* Blade Stealth, which has a 4K-resolution display. Unity 5.3.6 was instrumental in enabling Overland’s 4K shadows, which on some systems were blowing out video memory at that resolution. Overland makes use of Intel® HD Graphics, because, as Adam put it, “we want our work to be accessible to as wide an audience as possible. Part of that is game design, but part of it is supporting as wide a range of hardware as we can.”


Figure 5: Adam and Bekah Saltsman demo Overland in the Intel Level Up booth at PAX West, in Seattle.

As part of that philosophy, Adam wants his games to look equally great whether they’re played on a state-of-the-art VR rig, or on an aging desktop. “Ninety-five percent of Overland runs at something like 500 fps on a five-year-old Intel® Core™ i3 processor, which I know, because that’s what’s in my dev system.” As they get closer to release, Adam plans on optimizing his code to spread the workload across cores.

Another key requirement of the contest was that games needed to be touch-enabled. Overland was touch-enabled from the start. “It was a mobile game, with mobile game controls,” Bekah said, before admitting that the current builds are no longer touch-screen friendly. “Touch was a fundamental part of the game’s design for the first 18 months,” Adam explained. “I’m a touch-screen interaction perfectionist, and there were things about our focused state and information previewing that needed attention. I’m looking forward to bringing it back.”

Balancing Game Development and Kids

With two young kids at home, Bekah and Adam built Finji with raising a family in mind. “When we both had ‘real’ jobs,” Bekah said, “each of us wanted to be the stay-at-home parent. It took a really long time before we could introduce children to our chaos.” Bekah describes balancing work and kids as being “different all the time. They’re five and three. The youngest is about to start pre-school, so this will be the first year both kids won’t be home during the day.”

The studio where Adam works is downstairs in their home, facing the back yard. Bekah’s office faces the front yard. “If the kids are outside, one of us can keep an eye on them while we’re working. There are always times when one of us has to jump up mid-whatever we’re doing, and stop them from whatever mischief they’re getting into. In that way, we need to be flexible.”

Conclusion

Overland is a work-in-progress that started life as a 2D tablet-based prototype. Winning Best Art Design in the 2016 Intel Level Up Contest has not only raised Overland’s profile among the game community, but also opened the door for access to Intel software tools and optimization expertise, particularly in multithreading code. Although no release date has been set for Overland, Finji has big plans for Q4 2016, when they will begin implementing new levels and features. The game has garnered plenty of awards in its pre-release state—who knows what accolades might follow?

Simple, Powerful HPC Clusters Drive High-Speed Design Innovation

Up to 17x Faster Simulations through Optimized Cluster Computing

Scientists and engineers across a wide range of disciplines are facing a common challenge. To be effective, they need to study more complex systems with more variables and greater resolution. Yet they also need timely results to keep their research and design efforts on track.

A key criterion for most of these groups is the ability to complete their simulations overnight, so they can be fully productive during the day. Altair and Intel help customers meet this requirement using Altair HyperWorks* running on high performance computing (HPC) appliances based on the Intel® Xeon® processor E5-2600 v4 product family.


 

Download Complete Solution Brief (PDF)

OpenCL™ Drivers and Runtimes for Intel® Architecture

What to Download

By downloading a package from this page, you accept the End User License Agreement.

Installation has two parts:

  1. Intel® SDK for OpenCL™ Applications Package
  2. Driver and library (runtime) packages

The SDK includes the components needed to develop applications. On a development machine, a driver/runtime package is usually installed as well so that applications can be tested locally. For deployment, pick the package that best matches the target environment.

The illustration below shows some example install configurations. 

 

SDK Packages

Please note: a GPU/CPU driver package or a CPU-only runtime package is required in addition to the SDK to execute applications.

Standalone:

Suite: (also includes driver and Intel® Media SDK)

 

 

Driver/Runtime Packages Available

GPU/CPU Driver Packages

CPU-only Runtime Packages  

Deprecated 

 


Intel® SDK for OpenCL™ Applications 2016 R2 for Linux (64-bit)

This is a standalone release for customers who do not need integration with the Intel® Media Server Studio. It provides components to develop OpenCL applications for Intel processors. 

Visit https://software.intel.com/en-us/intel-opencl to download the version for your platform. For details check out the Release Notes.

Intel® SDK for OpenCL™ Applications 2016 R2 for Windows* (64-bit)

This is a standalone release for customers who do not need integration with the Intel® Media Server Studio.  The Windows* graphics driver contains the driver and runtime library components necessary to run OpenCL applications. This package provides components for OpenCL development. 

Visit https://software.intel.com/en-us/intel-opencl to download the version for your platform. For details check out Release Notes.


OpenCL™ 2.0 GPU/CPU driver package for Linux* (64-bit)

The intel-opencl-r3.1 (SRB3.1) Linux driver package provides access to the GPU and CPU components of these processors:

  • 5th, 6th, or 7th generation Intel® Core™ processors
  • Intel® Celeron® J4000 and Intel® Celeron® J3000
  • Intel® Xeon® processor v4 or v5 with Intel® Graphics Technology (if enabled by OEM in BIOS and motherboard)

Installation instructions

Intel has validated this package on CentOS 7.2 for the following 64-bit kernels.

  • Linux 4.7 kernel patched for OpenCL 2.0

Supported OpenCL devices:

  • Intel® graphics (GPU)
  • CPU

For detailed information please see the driver package Release Notes.

 

 

For Linux drivers covering earlier platforms, such as 4th generation Intel® Core™ processors, please see the versions of Media Server Studio listed in the Driver Support Matrix.


OpenCL™ Driver for Iris™ graphics and Intel® HD Graphics for Windows* OS (64-bit and 32-bit)

The Intel graphics driver includes components needed to run OpenCL* and Intel® Media SDK applications on processors with Intel® Iris™ Graphics or Intel® HD Graphics on Windows* OS.

You can use the Intel Driver Update Utility to automatically detect and update your drivers and software.  Using the latest available graphics driver for your processor is usually recommended.


See also Identifying your Intel Graphics Controller.

Supported OpenCL devices:

  • Intel graphics (GPU)
  • CPU

For the full list of Intel® Architecture processors with OpenCL support on Intel Graphics under Windows*, refer to the Release Notes.

 


OpenCL™ Runtime for Intel® Core™ and Intel® Xeon® Processors

This runtime software package adds OpenCL CPU device support on systems with Intel Core and Intel Xeon processors.

Supported OpenCL devices:

  • CPU

Latest release (16.1.1)

Previous Runtimes (16.1)

Previous Runtimes (15.1)

For the full list of supported Intel® architecture processors, refer to the OpenCL™ Runtime Release Notes.

 


 Deprecated Releases

Note: These releases are no longer maintained or supported by Intel

OpenCL™ Runtime 14.2 for Intel® CPU and Intel® Xeon Phi™ Coprocessors

This runtime software package adds OpenCL support to Intel Core and Xeon processors and Intel Xeon Phi coprocessors.

Supported OpenCL devices:

  • Intel Xeon Phi coprocessor
  • CPU

Available Runtimes

For the full list of supported Intel architecture processors, refer to the OpenCL™ Runtime Release Notes.


Use Case: Intel® Edison Board to Microsoft Azure* Part 1

Once you've moved past the prototype development stage, you might find yourself in a position to deploy an actual IoT solution as a business product.

Let’s say you own a goods transportation company that delivers food and other temperature-sensitive products to shops throughout the country. Poor storage and transportation conditions, such as high temperature and moisture, contribute greatly to food loss because they create favorable conditions for pests and mold. One very efficient solution to this problem is to use IoT devices such as the Intel® Edison board to capture the temperatures inside these storage units, a gateway to gather the readings and route them appropriately, and Microsoft Azure* to store and analyze the data so you can get valuable feedback.

The following use case shows how to implement an IoT solution that delivers value by connecting the board, the gateway, and the cloud. With this scenario in mind, we will walk through the implementation of a prototype of the solution.

Using Microsoft Azure*, the Intel® Edison board, and Intel® IoT Gateway Technology

To create a high-value solution to the temperature problem described above, we need to set up the board, the gateway, and Azure*. The following sections address setup for the Intel® Edison board, Microsoft Azure, and the Wyse* 3000 Series x86-Embedded Desktop Thin Client.

Intel® Edison board

The Intel® Edison board runs a lightweight Yocto* Linux distribution and can be programmed using Node.js*, Python*, Arduino*, C, or C++. For this use case we used the Intel® Edison board with the Arduino breakout board, a Seeed* Studio Grove* Starter Kit Plus (Gen 2), a base shield, and a variety of sensors.
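
Before walking through setup, the following minimal Node.js sketch gives a feel for what the board-side code can look like. It is illustrative only: it assumes a Grove temperature sensor on analog pin A0, the mraa library that ships with the Yocto image, and the thermistor constants published for the Grove sensor.

var mraa = require('mraa');                       // low-level I/O library preinstalled on the Yocto image

var tempSensor = new mraa.Aio(0);                 // Grove temperature sensor on analog pin A0
var B = 4275;                                     // thermistor B constant (from the Grove sensor documentation)
var R0 = 100000;                                  // thermistor resistance at 25 °C

function readCelsius() {
    var raw = tempSensor.read();                  // 10-bit ADC value (0-1023)
    var R = (1023.0 / raw - 1.0) * R0;            // thermistor resistance
    return 1.0 / (Math.log(R / R0) / B + 1.0 / 298.15) - 273.15;
}

setInterval(function () {
    console.log('Temperature: ' + readCelsius().toFixed(1) + ' °C');
}, 5000);                                         // print a reading every 5 seconds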

The first time you use your Intel® Edison board, you have to configure it, flash it with the latest firmware, and test it. To do so, you can follow this simple Getting Started Guide.

Working with the IDE and Wi-Fi

Now that you have set up your Intel® Edison board, you can start programming it. Choose the programming language and IDE you prefer, then load one of the blink examples to verify that everything is set up correctly:

In order to finish the setup you need to follow a few more steps:

  1. Open the Device Manager again, find which COM port the Intel® Edison Virtual COM Port is assigned to, and set that port in the IDE.

  2. Load the blink example; if everything has gone well so far, the on-board LED should start blinking. (A minimal Node.js equivalent is sketched just after this list.)

  3. To connect to Wi-Fi, open PuTTY, and once you log in, type configure_edison --wifi.

  4. Work through the setup process and use the guide to connect to your Wi-Fi network (See Get Started with Intel® Edison on Windows).
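
If you would rather test the board from Node.js than from the Arduino IDE, a minimal blink sketch using the mraa library looks like the following. This is an illustrative sketch, not the exact example shipped with the IDE; it assumes the on-board LED is wired to pin 13, as it is on the Arduino breakout board.

var mraa = require('mraa');          // I/O library preinstalled on the Intel® Edison Yocto image

var led = new mraa.Gpio(13);         // on-board LED pin on the Arduino breakout board
led.dir(mraa.DIR_OUT);               // configure the pin as an output

var ledState = false;
setInterval(function () {            // toggle the LED once per second
    ledState = !ledState;
    led.write(ledState ? 1 : 0);
}, 1000);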

Microsoft Azure

Setting up an Azure Event Hub

For this setup, the free trial version will be used.

  1. Sign in with a Microsoft account. After signing in, click Sign up for a free trial. On the next page, fill in information about yourself, including credit card data; you will not be charged unless you explicitly upgrade to a paid subscription. After clicking the Sign up button at the bottom of the page, the Azure Portal home page appears. Click New in the top-left corner, then Internet of Things – Event Hub.

  2. First, create a namespace: click Service Bus on the left and then CREATE at the bottom.

  3. Fill in the required information and then click the check mark.

  4. Your namespace has now been created. Next, create the Event Hub. Make sure the Service Bus tab is selected on the left, and then click New at the bottom-left.

  5. Next, select App Services – Service Bus – Event Hub.

  6. Next, click Quick create and fill in the required information.

  7. You should now see something like this in your portal, under the Service Bus tab:

  8. You will need to further configure your Event Hub. Click on your namespace, and you will be prompted with the following window:

  9. Click the Event Hubs tab; you will see your Event Hub:

  10. Click on the event hub and then click Configure.

  11. You now need to create a new shared access policy for your hub; make sure you select Manage from the permissions tab.

  12. Next, create a consumer group. At your newly created hub, click the Consumer Groups tab and then click Create.

  13. Name your group and click the check mark.

  14. Your namespace must also have a key defined. Return to the namespace (the Service Bus tab on the left), click Configure, and then, under Shared access policies, type in a key name and make sure you check Manage.

  15. Click the Service Bus tab again, then click Connection information at the bottom (make sure you have selected your namespace).

  16. Here you can see your namespaces and their keys:

  17. If you return to your Event Hub's dashboard after you begin sending messages to it from the gateway, you will see something like this:

The Event Hub dashboard is simply a way to check whether the data is being received correctly, view the bandwidth of those messages, and see whether you are getting errors. To work with the data, you will need to read it from the Event Hub through an application built with one of the many SDKs that Microsoft offers (for an example, see the section below on How to use features of Azure services - Notifications). A full explanation of how to read data from the Intel® Edison board and send it to the newly created Event Hub using an IoT gateway can be found in the How to take the developed solution from the board to the gateway and to the cloud section.

SDKs for Microsoft Azure IoT

The most popular language for developing Azure apps is C#, but if you want to use other languages and platforms, visit GitHub.
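
As a concrete illustration of sending data to the Event Hub from Node.js, the sketch below posts a single reading to the hub's REST endpoint over HTTPS. The namespace, hub name, and SAS token are hypothetical placeholders; in practice you would build the SAS token from the shared access policy name and key created in the steps above.

var https = require('https');

var namespace = 'my-iot-namespace';                  // hypothetical namespace created earlier
var hubName   = 'temperatures';                      // hypothetical Event Hub name
var sasToken  = 'SharedAccessSignature sr=...';      // placeholder; generate it from your policy name and key

function sendReading(deviceId, celsius) {
    var payload = JSON.stringify({ deviceId: deviceId, temperature: celsius });
    var request = https.request({
        hostname: namespace + '.servicebus.windows.net',
        path: '/' + hubName + '/messages',           // Event Hubs REST endpoint for sending events
        method: 'POST',
        headers: {
            'Authorization': sasToken,
            'Content-Type': 'application/atom+xml;type=entry;charset=utf-8',
            'Content-Length': Buffer.byteLength(payload)
        }
    }, function (response) {
        console.log('Event Hub responded with HTTP ' + response.statusCode);  // 201 means the event was accepted
    });
    request.on('error', function (err) { console.error('Send failed: ' + err.message); });
    request.write(payload);
    request.end();
}

sendReading('edison-01', 4.2);

The same request could just as easily be issued from the gateway instead of from the board itself, which matches the board-to-gateway-to-cloud flow described above.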

Setting up the gateway

Wyse* 3000 Series x86-Embedded Desktop Thin Client

Regulatory model number: N03D

The gateway connects legacy and new systems, and enables seamless and secure data flow between edge devices and the cloud. We take data from the Intel® Edison board and send it to the gateway, and the gateway sends the data to the cloud as an event.
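
Conceptually, the relay running on the gateway can be as simple as a small Node.js service that accepts readings posted by the board and forwards them to Azure. The sketch below is illustrative only; the port and JSON payload format are assumptions for this example, not part of the Wyse or Intel® IoT Gateway software stack.

var http = require('http');

// Listen for readings posted by the Intel® Edison board on the local network
// and forward each one to the cloud (for example, with the sendReading()
// function sketched in the Event Hub section above).
http.createServer(function (req, res) {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
        var reading = JSON.parse(body);                        // e.g. { deviceId: 'edison-01', temperature: 4.2 }
        console.log('Received ' + reading.temperature + ' °C from ' + reading.deviceId);
        // sendReading(reading.deviceId, reading.temperature); // forward to Azure as an event
        res.writeHead(200);
        res.end();
    });
}).listen(8080);                                               // hypothetical port for the gateway service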

In the next section we detail the setup process for the gateway.

The new Wyse* 3000 Series x86-embedded thin client delivers powerful performance at an entry-level price. It has a 1.6 GHz dual-core Intel processor, an integrated graphics engine, and multiple connectivity choices. Its various configuration options support a wide variety of peripherals and interfaces, along with unified communications platforms such as Lync 2010, Lync 2013, and the Skype for Business client for Lync 2015 (UI mode), plus high-fidelity protocols such as RemoteFX and Citrix* HDX.

In order to get started, see the quick start guide. To get your thin client up and running, you must do the following:

  1. Make sure that the thin client and the monitor are turned off and disconnected from AC power. To place the thin client on a desk, attach the feet for a vertical or horizontal position. Alternatively, assemble the VESA mount with user-supplied screws and insert the thin client, with the cables facing down or to the side.

  2. Make all desired connections. To connect to a network, you can use a Base-T Ethernet cable; if your device is equipped with an SFP slot, use an SFP module; or use an optional Wi-Fi network adapter for wireless networks.

  3. Connect the power adapter to the thin client power input before plugging it into a 100-240 V AC outlet. Wait until the power button light turns off, and then press the power button to turn on the thin client. After the initialization sequence is complete, the activity light changes to green.

This article is continued in Industrial Use Case Part 2.

Additional Links:

From Intel® Edison to Microsoft Azure*

What is Azure*?

Getting Started with Microsoft Azure*

Use Case: From Intel® Edison to Microsoft Azure* Part 1

Use Case: From Intel® Edison to Microsoft Azure* Part 2

NetUP Uses Intel® Media SDK to Help Bring the Rio Olympic Games to a Worldwide Audience of Millions

In August of 2016, half a million fans came to Rio de Janeiro to witness 17 days and nights of the Summer Olympics. At the same time, millions more people all over the world were enjoying the competition live in front of their TV screens.

Arranging a live TV broadcast to another continent is a daunting task that demands reliable equipment and agile technical support. That was the challenge for Thomson Reuters, the world’s largest multimedia news agency.

To help it meet the challenge, Thomson Reuters chose NetUP as its technical partner, using NetUP equipment for delivering live broadcasts from Rio de Janeiro to its New York and London offices. In developing the NetUP Transcoder, NetUP worked with Intel, using Intel® Media SDK, a cross-platform API for developing media applications on Windows*.

“This project was very important for us,” explained Abylay Ospan, founder of NetUP. “It demonstrates the quality and reliability of our solutions, which can be used for broadcasting global events such as the Olympics. Intel Media SDK gave us the fast transcoding we needed to help deliver the Olympics to a worldwide audience.”

Get the whole story in our new case study.

Intel® HPC Developer Conference 2016 - Session Presentations

The 2016 Intel® HPC Developer Conference brought together developers from around the world to discuss code modernization in high-performance computing. If you missed the event, or want to catch a presentation again, we have posted the Top Tech Sessions of 2016 to the HPC Developer Conference webpage. The sessions are split out by track, including Artificial Intelligence/Machine Learning, Systems, Software Visualization, Parallel Programming, and others.

Artificial Intelligence/Machine Learning Track

Systems Track


High Productivity Languages Track

Software Visualization Track


Parallel Programming Track

 

Intel® Deep Learning SDK Tutorial: Installation Guide

Download PDF [792 KB]

Training Tool Installation Guide

Contents

1. Introduction

2. Prerequisites

3. Installing the Intel® Deep Learning SDK Training Tool from a Microsoft Windows* or Apple macOS* Machine

4. Installing the Intel® Deep Learning SDK Training Tool on a Linux* Machine

1. Introduction

The Intel® Deep Learning SDK Training Tool can be installed and run on the Linux* operating systems Ubuntu* 14.04 (or higher) and CentOS* 7.

The Training Tool is a web application that supports both local and remote installation. You can install it on a Linux server remotely from a Microsoft Windows* or Apple macOS* machine using the installation .exe or .app file, respectively. Alternatively, you can install it locally on a Linux machine by running the installation script.

You don’t need to install any additional software manually: the installation package deploys a Docker* container that includes all necessary components, including the Intel® Distribution of Caffe* framework and its prerequisites, and provides the environment for running the Training Tool.

2. Prerequisites

Make sure you comply with the following system requirements before beginning the installation process of the Intel® Deep Learning SDK Training Tool.

  • A Linux Ubuntu* 14.04 (or higher) or CentOS* 7 machine accessible through an SSH connection.
  • Root privileges to run the installation script on the Linux machine.
  • Google Chrome* browser version 50 or higher installed on the computer that will be used to access the Training Tool web user interface.

The system requirements are also available in the Release Notes document that can be found online and in your installation package.

3. Installing the Intel® Deep Learning SDK Training Tool from a Microsoft Windows* or Apple macOS* Machine

To install the Intel® Deep Learning SDK Training Tool from a Microsoft Windows* or Apple macOS* machine, download the installation package from https://software.intel.com/deep-learning-sdk, unpack and launch the TrainingToolInstaller executable file to start the wizard.

The macOS and Windows installation wizards look similar and contain exactly the same steps.

The wizard guides you through the installation process, advancing as you click the Next button. The installation includes the following steps:

  1. Welcome and License Agreement. Read carefully and accept the License Agreement to continue with the installation.
  2. Defining Settings. This panel configures the installation parameters, including network and security settings. Specify all required field values and modify the defaults if needed:
    • Training Tool password – Password to access the Training Tool web interface
    • Server name or IP – The address of the Linux machine on which the Training Tool will be installed
    • User name (with root access) – User name with root privileges on the Linux server
    • User password – The password of the above user account. These credentials are needed for user authentication during the installation process.
    • Private key file for user authentication – The private key used for user authentication when password authentication is not allowed on the Linux server
    • Proxy server for http – The proxy server IP for HTTP, if the connection between the Windows/Mac machine and the Linux server goes through a proxy
    • Proxy server for https – The proxy server IP for HTTPS, if the connection between the Windows/Mac machine and the Linux server goes through a proxy
    • Mount file system path – Linux file system path to be mounted as a volume inside the Docker container
    • Web application port – Network port used to access the Training Tool web interface

    Once you define all the settings, you can check the connection to the server by pressing the Test connection button. If the server is accessible, the test will result in the Connection successful status:

  3. Installing. Click the Install button to begin the installation. The Installing panel appears, showing the progress of the installation with a progress bar that depicts the status of the current step and the overall completion:

    When the indicator reaches 100%, click the now-active Next button to complete the installation.

  4. Complete. Congratulations! You have installed the Intel® Deep Learning SDK Training Tool on your Linux server.

    Click the Open now button to open the Training Tool web interface in your browser, or the Download link to download the latest version of the Google* Chrome browser, or the Close button to close the window.

4. Installing the Intel® Deep Learning SDK Training Tool on a Linux* Machine

You can install the Intel® Deep Learning SDK Training Tool on a Linux* operating system using the installation script. Download the script from https://software.intel.com/deep-learning-sdk and run it with the following available options:

1. volume <path> – Linux file system path that will be mounted as a volume inside the Docker* container

2. toolpassword <password> – Admin password to access the Training Tool web interface

3. toolport <port> – Network port to access the Training Tool web interface

4. httpproxy <proxy> – Proxy server for HTTP

5. httpsproxy <proxy> – Proxy server for HTTPS

6. help – Print a help message

NOTE: The mandatory parameter must be set to continue the installation; the remaining parameters are auxiliary and can be omitted.
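
As a purely illustrative example, assuming the downloaded script is named installtool.sh (a hypothetical name) and accepts the options above as double-dash flags, a local installation could be started as follows; check the script's help output for the exact syntax:

$ sudo ./installtool.sh --toolpassword <password> --toolport 8080 --volume /home/user/dlsdk-data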

Long license checkout on remote workstations

Problem:

License checkout for 2016 and newer product versions is very slow compared to the 2015 versions on machines that access a remote floating license server.

Environment:

Windows*

Root Cause:

Due to issues with license caching in the 2015 product versions, caching was disabled in the 2016 versions. The cache provided a temporary local copy of the license for frequent checkouts, but it would only allow features available in the cached license to be checked out, effectively invalidating other licenses. Without the caching, checkout requests over the network can be very slow.

Workaround:

There is no workaround to re-enable the caching in current versions. Instead, try to minimize the number of license checkouts by grouping source files into a single compile command line, as in the example below.
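
For example, with the Intel® C++ Compiler on Windows*, compiling several source files in one invocation results in a single license checkout (the file names are illustrative):

icl /c src1.cpp src2.cpp src3.cpp

Invoking the compiler once per file would instead check out the license for each invocation.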

 
