Channel: Intel Developer Zone Articles

Beating Tune Out with Useful Content


Guiding the Developer Path

If you want to win the hearts and minds of developers in their I-want-to-know, I-want-to-do, and I-have-to-fix moments, you’ll need to do more than just show up. 

You need to be useful and meet their needs in those moments. That means connecting developers to what they’re looking for in real time and providing relevant information when they need it. Users gravitate toward brands with snackable, educational content.

Only 9% of users will stay on a mobile site if it doesn’t satisfy their needs (for example, to find a solution to a coding problem). With an average of XX% of our developers on mobile, that's a significant portion of your traffic. Without utility, they will not only move on in the moment, they might never come back.

69% of users agree that the quality, timing, or relevance of a company’s message influences their perception of a brand.

Our most popular content centers on the “how-to” search. It’s the “I need to fix a performance issue” moment or the “I want to add a new feature to my app” moment. This is where video content can play a huge role, since it allows developers to learn at their own pace, often with step-by-step instructions.

 

"I want it NOW"

That sounds like something a toddler in the terrible twos would say, but it’s also what our audience is saying. They want immediate gratification, and they’re working faster than ever before. How can you improve the content and simplify the flow to get developers to what they want quickly?

 

Eliminate Steps

Think about the goal of your site: are you trying to drive awareness, downloads, registrations, or consumption? Everything you do should have a singular focus. Start with that goal and think about how you can cut the number of steps a user must take to reach it.

 

Anticipate Needs

Being quick also involves knowing what the developer wants BEFORE they want it. Put your big stuff first. You aren't writing a mystery novel where everything will be revealed at the last moment. The goal of every page should be easy to understand and the first thing developers see. You may have secondary goals, but they should never interfere.

 

Do a Reality Check

Grab your phone and try a few of these tasks. Even better, find someone who isn't familiar with your content and ask them to perform these tasks. How well does your content hold up? Can you streamline it further?

  1. Think of the key action you want a user to take. How long does it take to perform?
  2. Think of the most searched-for topics for your area. Try those searches. Are you there and do you like what you see?
  3. Find one of your new articles. How long does it take to read? 
  4. Think about which elements on your site are absolutely, positively, undeniably essential for your developers. How fast can you find them?
  5. Does every page you go to clearly state its goal at the top? Is anything else adding to the clutter?
  6. Can you easily remember the top key points about your content (your 15-second pitch)?
  7. If you scan down the page quickly, what do you remember seeing?
  8. Are you fighting with yourself or other Intel properties for developer attention in search results? 

 


How to move the Intel Software License Manager to a new server


Moving the Intel® Software License Manager involves many of the same steps as the initial install.  License checkout will be unavailable until the steps are completed.

  1. Download the Intel® Software License Manager User's Guide.
  2. Determine the hostname and host ID for the new server.  Instructions here.
  3. Log into the Intel® Registration Center.
  4. Download the latest version of the Intel Software License Manager and copy it to your new server.
  5. Modify the host information for your license by following the instructions here.
  6. Download your new server license to your new server.  The default folder used by the license server is /opt/intel/licenses/ for Linux* and [Program Files]\Common Files\Intel\ServerLicenses\ for Windows*.
  7. Download the client license file.
  8. Run the license manager installer according to the instructions in the User's Guide and provide the new license file or folder.
  9. After starting the license manager (lmgrd) process, make sure that the lmgrd and INTEL vendor daemon ports are not blocked by a firewall.
  10. Update the client machines to access the new server.  Check the INTEL_LICENSE_FILE environment variable.
    1. If it uses the port@host format, change the value to point to the new server.  Most likely only the host needs to change.  
    2. If it contains a path, check the path for the floating license.  Remove this floating license, and replace it with the client license file downloaded from the registration center in step 7.  
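The host swap in step 10 can be scripted. The sketch below is illustrative only: the port number and hostnames are placeholder values, and it assumes the variable already uses the port@host form, so only the host portion is rewritten.

```shell
# Sketch: swap only the host in a port@host INTEL_LICENSE_FILE value.
# "28518", "oldserver", and "newserver" are placeholder values.
INTEL_LICENSE_FILE="28518@oldserver"
INTEL_LICENSE_FILE="${INTEL_LICENSE_FILE%@*}@newserver"
export INTEL_LICENSE_FILE
echo "$INTEL_LICENSE_FILE"   # prints 28518@newserver
```

The `${VAR%@*}` parameter expansion strips the old host while preserving the port, so the same line works regardless of which port your license uses.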

For additional support, please file a ticket via the Online Service Center.

Intel Solutions and Technologies for the Evolving Data Center


 

One Stop for Optimizing Your Data Center

From AI to Big Data to HPC: End-to-end Solutions

Whether your data center is data- or compute-intensive and whether it serves cloud, high-performance computing, enterprise, storage, networking, or big data analytics, we have solutions and technologies to make your life easier. 

Explore

 

Data center managers, integrators, and developers can now optimize the entire stack to run faster and more efficiently on Intel® architecture. The Intel® Xeon® and Intel® Xeon Phi™ product family, paired with Intel® Solid State Drives and NVMe* storage, provides a strong foundation. Intel is committed to a standardized, shared platform for virtualization, including SDN/NFV (networking), while providing hardware-based security and manageability now and in the future.

But Intel is more than a hardware innovator. Regardless of your challenges, Intel provides optimized industry SDKs, libraries, and tuning tools. And these tools are supplemented by expert-provided training plus documentation including code samples, configuration guides, walk-throughs, use cases, and support forums.
 

 

AI: MACHINE LEARNING AND DEEP LEARNING

Intel supports rapid innovation in artificial intelligence, focusing on community, tools, and training. Starting with the Intel® Nervana™ AI Academy, this section of the Intel® Software Developer Zone drills down into computational machine learning and deep learning, with extensive Intel-optimized libraries and frameworks along with documentation and tutorials.

The Deep Learning Training Tool Beta helps you easily develop and train deep learning solutions using your own hardware. It can ease data preparation and help you design and train models using automated experiments and advanced visualizations.

Tools available include:
BigDL open source distributed library for Apache Spark*
Intel® Distribution for Python*
Deep Learning Webinar

 

MODERN CODE

You’ve no doubt heard of recent hardware innovations in the Intel® Many Integrated Core Architecture (Intel® MIC), including the multilevel extreme parallelism, vectorization, and threading of the Intel® Xeon® and Intel® Xeon Phi™ product family. Plus, there are larger caches, new SIMD extensions, new memory and file architectures, and hardware-enforced security of select data and application code via Intel® Software Guard Extensions (Intel® SGX).

But they all require code and tool changes to get the most from the data center. To address this, Intel provides training and tools to quickly and easily optimize code for new technologies.

Extensive free training on code improvements and parallel programming is available online and through workshops and events.

Tools available include:
Intel® Parallel Studio XE (vectorization advisor and MPI profiling)
Intel® Advisor (vectorization optimization and threading design tool)
Intel® C/C++ Compilers and Intel® Fortran Compilers
Intel® VTune™ Amplifier XE (performance analysis of multiple CPUs and FPUs)
Application Performance Snapshot Tool

 

BIG DATA ANALYTICS

When handling huge volumes of data, Intel can help you provide faster, easier, and more insightful big data analytics using open software platforms, libraries, developer kits, and tools that take advantage of the Intel Xeon and Intel Xeon Phi product family’s extreme parallelism and vectorization. Fully integrated with popular platforms (Apache* Hadoop*, Spark*, R, MATLAB*, Java*, and NoSQL), Intel optimizations have been well tested and benchmarked.

Extensive documentation is available on how real-life developers are using Intel hardware, software, and tools to effectively store, manage, process, and analyze data.

The Intel® Data Analytics Acceleration Library (Intel® DAAL) provides highly-optimized algorithmic building blocks and can be paired with the Intel® Math Kernel Library (Intel® MKL) containing optimized threaded and vectorized functions. In fact, the TAP Analytics Toolkit (TAP ATK) provides both Intel® DAAL and Intel® MKL already integrated with Spark.

 

HIGH-PERFORMANCE STORAGE

Intel is at the cutting edge of storage, not only with Intel® SSDs and NVMe but also by working with the open source community to optimize and secure the infrastructure. Training is available at Intel® Storage Builders University.


Major tools available include:
Intel® Intelligent Storage Acceleration Library (Intel® ISA-L)
Storage Performance Development Kit (SPDK)
Intel® QuickAssist Technology
Intel® VTune™ Amplifier
Storage Performance Snapshot
Intel® Cache Acceleration Software (Intel® CAS)

 

SDN/NFV NETWORKING

Besides providing a standardized open platform ideal for SDN/NFV (virtualized networking) and the unique hardware capabilities in Intel’s network controllers, Intel has provided extensive additions to, and testing of, the Data Plane Development Kit (DPDK) and training through Intel® Network Builders University. Check out the thriving community of developers and subscribe to the 'Out of the Box' Network Developers Newsletter.

   

HPC AND CLUSTER

If you run visualization or other massively parallel applications, you know the advantages of using the Intel Xeon and Intel Xeon Phi product family with MCDRAM and its associated NUMA/memory/cache modes, wide vector units, and up to 68 cores. While the Intel® Scalable System Framework (Intel® SSF) and Intel® Omni-Path Architecture (Intel® OPA) focus on performance, balance, and scalability, Intel is working with research and production HPC sites and clusters to support integration with all the major stacks, as well as developing code and tools to optimize and simplify the work.

The Intel® HPC Orchestrator provides a modular integrated validated stack including the Lustre* parallel file system. It is supplemented by critical tools for cluster optimization:

Intel® Trace Analyzer and Collector which quickly finds MPI bottlenecks
Intel® MPI Library and docs to improve implementation of MPI 3.1 on multiple fabrics
MPI Performance Snapshot to help with performance tuning.
Intel® VTune™ Amplifier XE for performance analysis of multiple CPUs, FPUs and NUMA

 

 

Conclusion

Regardless of your job title and data center activities, Intel helps streamline and optimize your work to gain a competitive edge with end-to-end solutions, from high-performance hardware to new technologies, optimizations, tools and training. See what resources Intel provides to optimize and speed up your development now and remain competitive in the industry.

Explore

The New Issue of The Parallel Universe is Here: Transform Sequential C++ Code to Parallel with Parallel STL


Get your hands on the new issue of The Parallel Universe, Intel’s quarterly magazine that explores inroads and innovations in software development.

This issue’s feature article, Parallel STL: Boosting Performance of C++ STL Code, gives an overview of the Parallel Standard Template Library in the upcoming C++ standard (C++17) and provides code samples illustrating its use.

This issue’s other hot topics include:

  • Happy 20th Birthday, OpenMP*: Making parallel programming accessible to C/C++ and Fortran* programmers
  • Solving Real-World Machine Learning Problems with Intel® Data Analytics Acceleration Library: Models are put to the test in Kaggle competitions
  • HPC with R: The Basics: Satisfying the need for speed in data analytics
  • BigDL: Optimized Deep Learning on Apache Spark*: Making deep learning more accessible

Read it now >

Deploy an SDN Wired/Wireless Network with Open vSwitch* and Faucet*


By Shivaram Mysore

Overview

This article describes a Software Defined Networking (SDN) enabled wireless network using Intel hardware, Open vSwitch* (OvS) and Faucet*, which is an open source SDN controller. Instructions on how to configure this network are included.

Why an SDN-Enabled Network?

Unlike a traditional L2/L3 switch, an SDN-enabled switch provides for control and data plane separation. In the above illustration, Faucet represents a controller, and OvS can represent a data plane switch. With the use of standards-based OpenFlow* protocol, you don’t have to write device-specific drivers to handle various switch data paths. Additional advantages include the following:

  • Upgrade controller in <1 sec while the network is still running, without having to reboot the hardware; help prevent zero-day attacks
  • Easy automation and integration with YAML-based configuration
  • Configurable learning; example: unicast flooding
  • Configurable routing algorithms
  • ACLs, policy-based forwarding (PBF), policy-based routing (PBR) based on OpenFlow matches
  • Stacking of vendor-agnostic switches (fabric)
  • High availability via idempotency
  • Scalability
  • Data plane for network functions virtualization (NFV)
  • NFV offload support: DHCP, DNS, NTP, BGP, Radius, and more
  • Dynamic segmentation based on 802.1x
  • Real-time network statistics and flow information (at most a 10-second delay). Statistics are stored in a time-series InfluxDB* database so that one can look at the network historically. Flows are stored in CouchDB*.
  • Applications are written to Apache CouchDB™ and InfluxDB* APIs for network state information without causing network switch performance overhead.

Refer to Faucet presentations and articles for more detailed information.

Faucet Network Deployment

The figure above shows the network configuration. The Intel® processor-based server hosts the OvS (v2.6) software on Ubuntu* v16.10 to serve as the OpenFlow switch data plane. Another Intel® Celeron® processor-based box (QOTOM Mini PC) running Ubuntu 16.10 serves as the host for the Faucet and Gauge* controller. A pfSense*-based router is used for network isolation.

Setting Up the Software

pfSense

Download and install the open source pfSense router software on an Intel processor-based box (example: QOTOM Mini PC). Most of the required services run out of the box on installation.

Faucet and Gauge Controller

  1. Install Ubuntu 16.10 server on an Intel processor-based box (example: QOTOM Mini PC). Alternatively, a virtual machine or Docker image can be used. Refer to the Faucet website for more information.
  2. After installation, as user root, git clone the repository.
    $ sudo su
    # cd ~/
    # git clone https://github.com/shivarammysore/faucetsdn-intel/
    # cd ~/faucetsdn-intel/src/scripts/install
  3. Run the script to set up Faucet, CouchDB, and the Grafana* server for Gauge.
    # cd ~/faucetsdn-intel/src/scripts/install
    # ./install_4faucet.sh
  4. Make sure to update the configuration files with the correct Datapath ID (dp_id) and port information. For more information on modifying the files, check out the Faucet YouTube* demo videos:
    # /etc/ryu/faucet/faucet.yaml
    # /etc/ryu/faucet/gauge.yaml


    Restart services as needed.
    # systemctl restart faucet
    # systemctl restart gauge
  5. This should start the Faucet and Gauge services. Note the IP address of the machine. Faucet runs on port 6653 and Gauge on port 6654.
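For orientation, a minimal faucet.yaml might look like the fragment below. This is a hedged sketch, not your actual configuration: the VLAN name, switch name, dp_id, and port numbers are illustrative and must be replaced with values matching your own switch (see the Faucet documentation for the full schema).

```yaml
vlans:
  office:
    vid: 100
dps:
  ovs-switch:
    dp_id: 0x1              # must match the switch's datapath ID
    hardware: "Open vSwitch"
    interfaces:
      1:
        description: "wireless AP uplink"
        native_vlan: office
      2:
        description: "wired host"
        native_vlan: office
```

Gauge's gauge.yaml references the same dp_id values, so keep them consistent across both files.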

Installing Open vSwitch on Intel x86

This section lists the steps for getting OvS working as a software switch. Here we will make sure the connectivity and software stack work.

  1. Install the Ubuntu 16.10 server on the Intel processor-based server as shown in the figure above. You only need to install the OpenSSH* and basic system utilities packages.
  2. After installation, as user root, git clone the repository
    $ sudo su
    # cd ~/
    # git clone https://github.com/shivarammysore/faucetsdn-intel/
    # cd ~/faucetsdn-intel/src/scripts/install
  3. Edit the installation script, Step1_install_u16_10s_pkgs.sh:
    1. Set USER_LIST to the name of the user on the system.
    2. Check all the interface names to make sure that they match your system.
  4. Run the script to install the required packages, and then set up Docker. Note: Docker is not used at this time, so you may want to comment out the Docker section as appropriate.
    # ./Step1_install_u16_10s_pkgs.sh
  5. Edit the ovswitch.properties file, which is self-descriptive.
    1. Make sure IPV6 = false and DPDK = false
    2. Refer to the above figure for various port names and relationships.
  6. Run the script to set up OvS
    # ./Step3_setup_ovs.sh
  7. Make sure that the value of DATAPATH_ID in ovswitch.properties is the same as the one in the /etc/ryu/faucet/faucet.yaml file. This tells the controller which switch it needs to manage and monitor.
  8. If everything is set up right, OvS is running and should already be managed by the controller.
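The cross-check in step 7 can be scripted. The sketch below is illustrative: it writes two throwaway stand-in files (the field names follow the article; the values and /tmp paths are made up), extracts the ID from each, and compares them after stripping the 0x prefix and leading zeros. In a real run you would point the `grep` commands at your repo copy of ovswitch.properties and at /etc/ryu/faucet/faucet.yaml instead.

```shell
# Illustrative stand-in config files; real paths would be your repo
# copy of ovswitch.properties and /etc/ryu/faucet/faucet.yaml.
printf 'DATAPATH_ID=0000000000000001\n' > /tmp/ovswitch.properties
printf 'dps:\n  ovs-switch:\n    dp_id: 0x1\n' > /tmp/faucet.yaml

props_id=$(grep '^DATAPATH_ID=' /tmp/ovswitch.properties | cut -d= -f2)
yaml_id=$(grep 'dp_id:' /tmp/faucet.yaml | awk '{print $2}')

# Normalize: faucet uses a 0x literal, the properties file bare hex
# with leading zeros; drop both before comparing.
props_norm=$(printf '%s' "$props_id" | sed 's/^0*//')
yaml_norm=$(printf '%s' "${yaml_id#0x}" | sed 's/^0*//')
[ "$props_norm" = "$yaml_norm" ] && echo MATCH || echo MISMATCH   # prints MATCH
```

Note the comparison assumes both files use the same hex letter case; a mismatch here means the controller will silently ignore your switch.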

Wireless

  1. Connect any wireless access point, such as the low-cost TP-LINK TL-WA855RE, to one of the OpenFlow ports managed by the OvS bridge.
  2. Because the LAN cable terminates on an OpenFlow port, any client connected to the wireless AP will be served by OpenFlow-enabled OvS and controlled by Faucet.

Summary

In this article, we learned how simple it is to configure an enterprise-grade, fully programmable SDN wired/wireless network using off-the-shelf Intel x86 hardware and the Faucet SDN controller software. This setup enables on-demand programming for security and network operations scenarios.

Questions and Support

About the Author

Shivaram Mysore is a serial entrepreneur, results-oriented business leader, and recognized technical expert in security (cryptography, identity, web services, XML) and networking (SDN). He has worked for companies such as Sun Microsystems (Sun Labs/JavaSoft), Microsoft, and Infoblox, and has consulted for Fortune 500 companies in addition to contributing to the development of many industry standards at W3C, ONF, the PC/SC Workgroup, and ANSI. Currently, he contributes as a core team member to Faucet SDN-related open source initiatives and helps organizations deploy SDN. He can be reached via shivaram dot mysore at gmail dot com.

References

How to Set Up Your Intel® NUC Kit


Setting up new hardware, once purchased, can be a daunting experience. This article demonstrates how simple it is to assemble and set up your new Intel® NUC kit.

Intel® NUC is a Big Player in a Tiny Box

The Intel® NUC Kit NUC6i7KYK, with its sleek and compact form factor, is a workhorse with Intel® Iris® Pro graphics. The Intel® NUC has proven to be a tiny, powerful PC that is great for gaming, living rooms, tradeshows, festivals, and anywhere space is at a premium. This PC platform has many advantages for the game developer: it is easy to customize, configure, and take with you.

Selecting the right hardware and then setting it up correctly can seem daunting, but the process is simpler than it looks.

Intel® NUCs are Fully Customizable

You may be wondering why Intel® NUCs are shipped as incomplete, bare-bones PCs. It comes down to user choice. One buyer may need two massive SSDs in RAID 0 for maximum storage in a compact form factor; another may need extra memory. A small, customizable PC with the power of a 6th generation Intel® Core™ i7-6770HQ processor is exactly what many enthusiasts need.

Since the Intel® NUC ships without a hard drive or memory, it is up to the user to purchase compatible hardware. For memory, this can be confusing. Thankfully, most motherboard manufacturers include a list of compatible memory hardware. Simply visit the System Memory for Intel® NUC Kit NUC6i7KYK page, select the memory amount and speed you need, then search for that part number wherever you purchase memory. For example, HX421S13IBK2/16 is a 16 GB memory kit available at many retailers.

Next you will need an M.2 drive (or two!) such as the Intel® SSD 600p Series. Two drives can be configured in RAID 0 for extra speed, but we won’t be covering RAID setups today.

That’s all the hardware you’ll need to set up your Intel® NUC. Now onto the fun part - installation.

Unboxing your Intel® NUC

Start by taking your Intel® NUC out of the packaging. Make sure the power cable is not plugged in. There are 4 screws on the bottom – loosen them! Note that the screws will not come all the way out of the chassis lid.

Tools you will need

Once you have removed the lid you’ll be greeted by a nice bare board ready for components to be slotted in.

Meet your unboxed system

Start by inserting your memory into the DDR4 DIMM slots. When completed, it should look like the picture below. The memory should make a satisfying snap to confirm it has been properly seated, and the contacts should be mostly hidden.

Intel® NUC with memory installed

Here is an up-close image:

A closer view of the Intel® NUC with memory installed

Next we’ll install an M.2 hard drive. Locate the M.2 HDD installation point and remove one of the anchoring screws. The screws are opposite the side of the machine where the memory is installed. Unlike the screws for the chassis lid, these screws will fully come out of their sockets.

View of the Intel® NUC – M.2 HDD installation point

Once the screw is removed, insert the M.2 drive into its socket. Replace the screw to hold the drive in place. Once completed, it should resemble the picture below.

Intel® NUC with M.2 drive installed

With both the M.2 SSD and the memory installed, your Intel® NUC hardware installation is complete!

View of the Intel® NUC with memory and M.2 drive installed

Installing Windows* on your Intel® NUC

The next step is to begin your Windows* installation. Microsoft offers the Media Creation Tool to aid system builders in creating their own Windows 10 installation media. Note that the utility will wipe your drive, so move any needed information off of the drive before you proceed. Follow the instructions on the Microsoft website to download and install Windows.

After completing the installation of Windows, it is a good idea to make sure your BIOS is up to date. The most current BIOS can be found on Intel’s Driver and Support website.

After you download and run the BIOS update, it can take up to 3 minutes to install and will reboot your machine. Do not attempt to power off, reboot, or remove the power cable while the BIOS is updating!

The final step is to run Windows Update. Windows Update will retrieve the latest Intel drivers, or you can download the latest drivers from Intel’s Driver and Support website (the same website linked previously).

And you’re done! Install your development environment and create awesome content!

Summary

In this article, we discussed the benefits of the Intel® NUC for users, and particularly for gaming enthusiasts. We stepped through the installation of the memory and hard drive, and finally the installation of Windows on your Intel® NUC.

References

  1. Mighty Meets Mine: Intel® Skull Canyon

About the Author

Landyn Pethrus is an engineer at Intel, an avid gamer, and a hardware enthusiast. He specializes in fountain-sniping opponents with Ancient Apparition in Dota 2* and slaying bosses in World of Warcraft*.

Introduction to the Zephyr* Real-Time Operating System (RTOS) with the Intel® Quark™ microcontroller D2000


Overview

This article introduces you to the Zephyr* RTOS and explains how to configure it for the Intel® Quark™ microcontroller D2000.

Zephyr* RTOS with the Intel® Quark™ microcontroller D2000

Welcome to the Zephyr* RTOS with the Intel® Quark™ microcontroller D2000! Intel is now building embedded microcontrollers, scaling the Pentium® processor down to microcontroller size to be the heart of small battery-powered devices. The Intel® Quark™ microcontroller D2000, based on Intel’s lowest-power Pentium® processor, is designed to control battery-powered electronics like wireless sensors and wearables. To support development with the Intel® Quark™ microcontroller D2000, and to make it easy to build devices with other Intel® Quark™ microcontrollers and beyond, Intel worked with the Linux Foundation* to build a real-time operating system (RTOS) called Zephyr*. Zephyr is an open-source RTOS designed to operate in microcontrollers with limited memory. The Zephyr RTOS is a software platform that simplifies software development, freeing you up to focus more on algorithms and less on hardware.

The Zephyr RTOS includes driver libraries to:

  • Talk to sensors
  • Keep track of time
  • Send messages to the internet
  • Communicate using radios, like Bluetooth® technology or Wi-Fi
  • Manage power consumption to extend battery life

The Zephyr RTOS is compatible with an array of processors, not just those in Intel® Quark™ microcontrollers. The description in this article also applies to using the Zephyr RTOS with microcontrollers from other manufacturers.

Before you begin, download the Zephyr RTOS from the Zephyr Project* website or as part of Intel® System Studio for Microcontrollers.

What is an RTOS?

An RTOS is an operating system with a focus on real-time applications. Zephyr is similar to the operating systems you find on desktop computers and laptops. The difference is that an RTOS performs tasks in a predictable, scheduled manner, with a focus on getting the most important tasks done on time. In an embedded device, timing is critical. On a desktop, it doesn’t matter much if your computer decides to check for new emails before it starts playing a video. The operating system has a running list of tasks and decides which tasks are most important. Chances are user software is not the highest priority task. An extreme example is an embedded system in a car: it matters if the microcontroller decides to check for email when it should be triggering the airbag. An RTOS is an operating system that you control completely.

Why Use an RTOS?

As the Internet of Things expands, formerly unconnected devices are getting “smarter” (e.g., able to send data to the cloud) and increasingly complicated. As complexity grows, software becomes more difficult to manage. Simple devices with single purposes probably don’t need to run an RTOS. But, complicated devices with multiple sensors and radios that need to be smart (connected and responsive) are easier to build and maintain with an RTOS.

The RTOS manages software complexity by encapsulating all the activities the microcontroller needs to perform into individual tasks. Then, the RTOS provides tools to prioritize the tasks, determining which tasks always need to execute on time and which tasks are more flexible. Some applications, like communication over radios to the internet, have strict timing requirements and complicated communication protocols. You can rely on the Zephyr RTOS to make sure that communications happen on time and that your microcontroller responds appropriately without you having to write any software to make it happen.

Writing software with an RTOS is a familiar process for developers coming to microcontrollers from desktop programming. For embedded developers with a background in bare-metal firmware on microcontrollers, an RTOS is a powerful new tool. The structure of the RTOS improves encapsulation, isolating different functional pieces of software from each other, and provides tools to exchange information between different functional code blocks. This helps avoid one of the greatest hazards of microcontroller firmware development: memory management errors. For developers to take advantage of the Zephyr RTOS, it’s important to understand how it works. In the next section, we take a look at the features and capabilities of the Zephyr RTOS.

Zephyr Kernel Fundamentals

What’s a Kernel?

The core functionality of the Zephyr RTOS is the Zephyr kernel. The kernel is software that manages every aspect of hardware and software functionality. The Zephyr kernel is designed to be small, requiring little program and data memory. There are two main components to the Zephyr kernel: a microkernel and nanokernel. Each has different memory requirements and features.

Nanokernel

The nanokernel is the smaller of the two. It’s designed to operate on smaller microcontrollers in devices with less functionality (e.g., a sensor measuring only temperature). It requires as little as two kilobytes of program memory, which means it can be used in all but the very smallest microcontrollers.

Microkernel

The microkernel is a full-featured kernel for more complex devices: a smartwatch with a display, multiple sensors, and multiple radios. The microkernel is designed for larger microcontrollers with between 50 and 900 kilobytes of memory. Every feature of the nanokernel is available to the microkernel, but not the other way around. With 32 kilobytes of memory, the Intel® Quark™ microcontroller D2000 is ideal for the nanokernel. A simple microkernel project may fit, but if you don’t need any particular microkernel functions, pick the nanokernel, which is better suited to the size of the microcontroller’s memory. The core functionality of the kernels is the same either way; you just won’t be able to use advanced memory management features that aren’t supported by the nanokernel.

Three Contexts in the Zephyr RTOS

The Zephyr kernel provides three main tools for organizing and controlling software execution: tasks, fibers, and interrupts. In the Zephyr documentation, these tools are called contexts because they provide the context within which software executes, and each context has different capabilities.

Tasks

In the Zephyr RTOS, major software functionality is encapsulated in a task. A task is a piece of software that performs processing that takes a long time or is complicated (e.g., interacting with a server on the internet over Wi-Fi* or analyzing sensor data looking for patterns).

Tasks are assigned priorities, with more important activities assigned higher priorities. Lower priority tasks can be interrupted if a higher priority task needs to take action. When a higher priority task interrupts a lower priority task, the lower priority task’s data and state are saved, and the higher priority task’s data and state are loaded. When the higher priority task finishes its work, the lower priority task is restored and starts again at the point it was interrupted. Tasks take over the processor, perform their function, and then go to sleep to wait until they are needed again. The Zephyr kernel contains a scheduler that determines which task needs to run at any time. You can precisely control when a task executes: based on the passage of time, in response to a hardware signal, or based on the availability of new data to analyze. If it’s important that a task responds quickly to its trigger, it should be assigned a higher priority. Tasks execute in an endless loop, sleeping most of the time while waiting to be called to perform their function.

Fibers

Fibers are smaller than tasks, used to perform a simple function or just a portion of the processing. Fibers are defined and started by tasks, and they take precedence over tasks: a task can only run when no fiber needs to execute, so you need to make sure that fibers don’t monopolize the system. Fibers are prioritized among themselves like tasks, but no fiber can interrupt a running fiber, and fibers cannot be interrupted by tasks. They should be used for performance-critical, timing-sensitive work that requires immediate action, like communicating with sensors where the timing of responses could cause a problem. Fibers should not be used for processing that takes a long period of time.

Interrupts

Interrupts are the highest priority context; their execution takes precedence over fibers and tasks. They allow the fastest possible response to an event, whether it's a hardware signal from a safety mechanism or the arrival of critical communications. Interrupts are prioritized like tasks and fibers, so a higher priority interrupt can take over the processor from a lower priority one; when the higher priority interrupt finishes, the lower priority interrupt resumes. Interrupts are handled by software functions called Interrupt Service Routines (ISRs); each interrupt has an ISR that runs whenever the interrupt occurs. As a rule, ISRs should be kept as short as possible so that they don't interfere with the schedule of the rest of the system. Commonly, an ISR sends a message to a task or fiber, passing data or telling it to run. This keeps the ISR short and offloads longer processing to parts of the application that can be preempted.

Kernels and Tasks

Nanokernel

The nanokernel, as described above, is the smaller of the two kernels. The nanokernel has only one task, known as the background task, which can only execute when no fiber or interrupt needs to execute. The background task is the main() function. The nanokernel can have zero fibers or as many as your application needs. The nanokernel also has no limits on the number of interrupts up to the limits imposed by the hardware of the microcontroller and the size of your program memory. As we said before, the nanokernel is better suited to the Intel® Quark™ microcontroller D2000 because of the size of its program memory.

Microkernel

The microkernel is more powerful than the nanokernel. It also requires more memory resources. The microkernel supports having more than one task and allows you to group tasks to work together to perform a larger function. Microkernel fibers and interrupts are the same as in the nanokernel. The microkernel has more sophisticated functionality for handling memory, for sending data between tasks and fibers, and for managing the microcontroller’s power consumption.

Advanced Zephyr Kernel Functionality

You can make complete applications with just the features described above, but to get the most out of Zephyr, you should get familiar with some of its advanced features. The Zephyr kernel includes functionality to synchronize operations, to pass data between tasks, and to trigger execution of tasks based on external events. It’s beyond the scope of this introduction to go into detail on all of the features. For more information about the deeper features of the Zephyr RTOS, consult the Zephyr Project* Documentation.

Getting Started with the Zephyr Kernel

First Steps

To get started with Zephyr, you’ll need to download the Zephyr kernel and follow the instructions to set up the Zephyr Development Environment on your computer. With Zephyr installed, it’s a good idea to start with a sample project like the hello world project, which you can find in the samples directory where you installed the Zephyr Project code. Follow the instructions for building an application shown in the Getting Started guide. You’ll then understand how to compile your own application and verify that you’ve gotten everything set up correctly. Take a look at the hello world sample application directory. We’ll go through it to understand what all the files are, their purpose, and how you can modify them to build your own applications.

Nanokernel or Microkernel?

The first thing you’ll notice is that there’s a directory for the nanokernel and another for the microkernel. You can build either and they will look the same from the outside. As mentioned earlier, the nanokernel is the more likely kernel for the Intel® Quark™ microcontroller D2000. Still, it’s important to understand the differences to know which kernel is right for your application. Let’s start with the microkernel.

Microkernel Organization

A microkernel project consists of at least five files:

  1. A configuration file that instructs the kernel to enable features you want to use in your application. Based on the instructions in the configuration file, the kernel will enable hardware functionality and include the appropriate drivers in your project.
  2. A microkernel object definition file that initializes RTOS features, like tasks and interrupts.
  3. An application makefile that tells the Zephyr build system which processor you are using, which kernel you are targeting (nanokernel or microkernel), the name of the project configuration file, and the name of your microkernel object definition file.
  4. Your source code, contained in a subfolder in the project folder.
  5. A makefile that instructs the compiler how to build your source code.

Let’s take a look at each of these files and how you can modify them for your application.

Kernel Configuration File

The Zephyr RTOS is highly configurable with a huge array of options for tailoring the kernel to meet your application’s needs. In the configuration file, commonly named prj.conf, you determine which Zephyr features you’ll use in your application. By only turning on the features that you need, you control the size of the Zephyr libraries included alongside your application code. Every feature that you intend to use in your application needs to be explicitly enabled using definition statements like the one you see in the prj.conf file in the ‘Hello World’ project.

CONFIG_STDOUT_CONSOLE=y

This statement tells Zephyr to include the driver for the standard output console, which you use to send statements to be displayed on your computer. Other configuration options take the same form. Many drivers have an array of options for setting up hardware to operate exactly how you need it to work. The list of available options and configurations is quite extensive. To see all the available configurations and options, see the Zephyr Configuration Options Reference Guide.
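As an illustration, a hypothetical prj.conf for an application that prints to the console and talks to a sensor over I2C might enable a few more options. The exact set of options available depends on your Zephyr version, so check the Configuration Options Reference Guide before copying this:

```
CONFIG_STDOUT_CONSOLE=y
CONFIG_PRINTK=y
CONFIG_GPIO=y
CONFIG_I2C=y
```

Leaving out options you don't need keeps the compiled kernel as small as possible.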

Microkernel Object Definition File

The microkernel object definition file contains definitions of tasks and any other kernel objects your project needs. You should define any objects in this file that you want to be available to your entire application, across any number of source files. In the definition of a task, you need to give the task a name, a priority, a size of memory to use, and to assign that task to a group. In the ‘Hello World’ project, the prj.mdef definition file contains the following task definition:

% TASK NAME  PRIO ENTRY STACK GROUPS
% ==================================
    TASK TASKA    7 main  2048 [EXE]

The lines beginning with “%” are comments to clarify the code for the reader; they are not read by the Zephyr build system. The task name is a Zephyr name for the task, not the name of a function. The priority is what it sounds like: smaller numbers mean higher priority, so this task's priority of 7 is a low one. main is the name of the function that the task will call as its entry point when it starts running. 2048 is the size of the memory, in bytes, allocated to the task. This may seem like a lot, but this memory stores the Zephyr data that keeps track of the task's state and allows the task to be suspended and restarted. [EXE] is the name of the executable group. In your own project, you can use this same structure to create tasks, as well as define all of the more advanced features of the microkernel.

The Application Makefile

The application makefile informs the Zephyr kernel which files to use in building your application. It specifies the name of the kernel configuration file, the name of the microkernel configuration file, which processor architecture you are using, and the name of your source code application file. Generally, you won’t need to make changes to the standard Zephyr application makefile.

The Source Code Makefile

The source code makefile is necessary to inform the compiler how to build your source code. Underneath, the Zephyr development environment uses an open source compiler to convert your software into machine instructions. The source code makefile tells the compiler which files to include in the process and conveys compiler configuration instructions.

Your Source Code

The source code includes your application, structured in as many files as you need or prefer. To use any Zephyr functionality, a file must include zephyr.h as well as header files for any drivers that you intend to use. Source code files are generally written in C, although the Zephyr compiler allows the use of C++ outside of tasks, fibers, interrupts, and other Zephyr RTOS code. If you look at main.c in the hello world project, you’ll see a standard C file using Zephyr functions.

Nanokernel Differences

The nanokernel configuration is broadly the same as the microkernel configuration, except nanokernel projects don’t have microkernel object definition files. Since the nanokernel only uses one task, it’s automatically generated by Zephyr and will use your main function as the entry point. Fibers and interrupt routines are defined inside your source code. If you compare the microkernel project to the nanokernel project, there isn’t much difference. The prj.mdef file is not necessary, so it’s been removed, and the reference to it in the makefile has also been changed. Otherwise, the nanokernel and microkernel from this perspective are largely the same.

The Zephyr API

The Zephyr API is the toolset you will use to build your application code. It's full of functions for quickly prototyping hardware. There are libraries to define and use Zephyr functionality like tasks, fibers, interrupt routines, and timers, as well as drivers for communication buses and specific pieces of hardware. With the Zephyr API, you can hook up a development board and a sensor shield and be up and running, gathering data in no time. For more information about the Zephyr API, consult the API Documentation.

Next Steps

As you build applications and become familiar with Zephyr, you’ll start to think of writing software in different and new ways. Structuring your software according to tasks and their priority helps to make your software more responsive, more compact, and better organized.

 


How to Install the Neon™ Framework on Ubuntu*


Introduction

The neon™ framework is an Intel® Nervana™ Python*-based deep-learning framework designed to build and use deep neural networks such as AlexNet, VGG, and GoogLeNet.

There are many ways to install the neon framework. Users will need to install additional dependencies and packages in order to install the neon framework successfully.

This article presents a simple step-by-step way to install the neon framework in Ubuntu* 14.04 using the Anaconda* Python distribution. It also guides users through what to do if errors are encountered during the installation process. Additional installation instructions or further troubleshooting can be found here.

Installing the Neon Framework

This section will show how to install neon in a virtual environment. This way, neon lives in a self-contained environment with its own dependencies and a user-preferred Python version that can differ from that of the main environment. Furthermore, Anaconda incorporates the Intel® Math Kernel Library (Intel® MKL), which helps improve the performance of common packages like NumPy*, NumExpr*, SciPy*, and Scikit-learn*.

Follow these steps to install the neon framework:

  1. Install the Anaconda distribution of Python if it is not already there.
    1. Go to the Anaconda download website, select the Download for Linux* option and download either the 2.x or 3.x Python version of Anaconda for 64-bit Linux.
    2. Execute the following command to install Anaconda:
      bash Anaconda2-x.x.x-Linux-x86_64.sh (for python 2.x)
      or
      bash Anaconda3-x.x.x-Linux-x86_64.sh (for python 3.x)


      Note:

      - At the time of this writing, the latest version of Anaconda is 4.3.1. Therefore, the above commands should be written as follows:
      bash Anaconda2-4.3.1-Linux-x86_64.sh (for python 2.x)
      or
      bash Anaconda3-4.3.1-Linux-x86_64.sh (for python 3.x)

      - If Anaconda is already installed, update it to the latest version using the following commands:
      conda update conda
      conda update anaconda
  2. After Anaconda has been installed, create a new conda environment for the neon framework. We’ll name it neon, but you can use any name you want.

    conda create --name neon pip


    Figure 1. Create the neon™ framework environment.

    Figure 1 shows that the neon framework environment was created successfully.

  3. Activate the new environment using the following command:

    source activate neon

  4. Download the neon framework package using Git*:

    git clone https://github.com/NervanaSystems/neon.git

    Figure 2. Cloning the neon™ framework from GitHub*.

    Figure 2 displays the result when the cloning process is successful.

    Note: if git is not already set up on your computer, you can install it by typing the following:
     

    sudo apt install git

  5. Install the neon framework package using make. Make sure to go to the folder containing the package before using make:

    cd neon && make sysinstall

    Figure 3. Installing the neon™ framework.

    Figure 3 shows the messages that display when the neon™ framework has been successfully installed.

    If there are errors, the screen will look like that in Figure 4:

     

    Figure 4. Messages that display if errors occur while installing the neon™ framework.

    Figure 5 shows a situation when the neon framework cannot be installed in the system due to missing components.

    Figure 5. Error messages that display during installation when there are missing components.

    From Figure 5, it appears that the installation cannot find the file “pillow.” There are two possible reasons: the package containing pillow is corrupted or the package containing pillow has not been installed. The safe way to fix the problem is to uninstall the package and reinstall it. Use the following command to install the package:

    conda install pillow

    To uninstall a package, use the following command:

    conda uninstall <package to uninstall>

    Figure 6. Installing missing components.

    Figure 6 shows how to install missing components for the neon framework.

  6. Deactivate the environment when the installation completes:

    source deactivate neon

    Figure 7. Deactivating the neon™ framework environment.

Testing the Neon Framework Installation

To ensure the neon framework is installed correctly, run the MNIST multi-layer perceptron example included in the package. The example runs on the CPU here because a CPU was detected and the parameters were not changed to run on an available GPU. Follow these steps to run the example (after activating the environment, you will see the shell prompt change to reflect it):

  1. Activate the neon framework environment.
    source activate neon
  2. Run the example by issuing the following command:
    examples/mnist_mlp.py

    Figure 8. Running examples under the neon™ framework.

    If the example is running correctly, you will see something similar to that in Figure 8.

  3. Deactivate the environment when you are finished running the example.

    source deactivate neon

Conclusion

This article described a simple way to install the neon framework on Ubuntu using the Anaconda Python distribution.


Zephyr* Scheduling Basics with the Intel® Quark™ microcontroller D2000


Overview

In this article, you’ll learn about:

  • The fundamentals of scheduling software execution with the Zephyr* Real-time Operating System (RTOS) and the Intel® Quark™ microcontroller D2000.
  • Zephyr software mechanisms called tasks and fibers, which are essential components of all Zephyr applications.
  • Initialization and use of tasks and fibers in your applications.
  • Common problems when getting started with the Zephyr* RTOS and how to avoid them.

The Intel® Quark™ microcontroller D2000 and the Zephyr* RTOS

With the Intel® Quark™ microcontroller D2000, Intel is staking its place at the edge of the Internet of Things (IoT). The Intel® Quark™ microcontroller D2000 was designed from the ground up for IoT applications where low power is important. Small battery-powered sensor devices gathering data in homes, businesses, factories, and farm fields require ultra-low-power electronics. With sleep currents in the single-digit microamps, a sensor device powered by an Intel® Quark™ microcontroller D2000 and transmitting data over a Bluetooth® low energy radio could run for a couple of years on a pair of lithium-ion batteries.

The core of the Intel® Quark™ microcontroller D2000 is a Pentium® processor. Low power but still powerful enough for IoT, it’s fully compatible with the x86 instruction set and capable of executing code written for its desktop counterparts. Benefiting from decades of Pentium® processor architecture refinement and software execution optimization, the Intel® Quark™ microcontroller D2000 is a modern microcontroller with a reliable history.

To support software development with this new microcontroller, Intel partnered with the Linux Foundation* to build an open-source real-time operating system (RTOS). Based on source code developed by Wind River*, a wholly owned subsidiary of Intel Corp., the Zephyr* RTOS is built for resource constrained microcontrollers with less than 512kB of system memory. The Zephyr RTOS comes in two sizes and is highly configurable, allowing the user to choose an appropriate feature set and enable only necessary software features to minimize Zephyr’s memory footprint. The Zephyr RTOS includes an Application Programming Interface, or API, with tools and drivers that make working with embedded devices, like sensors and radios, a relatively simple process. If you’re new to working with an RTOS, you’ll find that writing applications with Zephyr will shorten processor bring-up, reduce software issues during hardware validation, and streamline multi-threaded development. If you’re experienced with an RTOS, you’ll find that Zephyr provides all the tools of a world-class RTOS in a fresh package, custom designed to meet the needs of modern designers of IoT.

Zephyr RTOS Fundamentals

Zephyr is a multi-threaded operating system, meaning that it can effectively perform multiple operations at the same time. Functional blocks of code are executed in turn, according to priorities that you assign to tasks and fibers. Separate blocks of code aren't actually running simultaneously: since the Intel® Quark™ microcontroller D2000 has only one processor core, it executes functions one at a time, handling higher priority code first and executing lower priority code when higher priority code is idle. You decide which functions are most important, and Zephyr will prioritize their execution to meet critical timing requirements.

In Zephyr, functional blocks of code can be executed in your choice of three execution contexts: task, fiber, or interrupt. A task is for larger pieces of code that take longer to execute and aren’t as time sensitive. A fiber is for smaller operations with stricter timing requirements like hardware drivers. An interrupt is for the smallest operations which are the most time critical, like responding to a hardware or software event. Tasks are the lowest priority and can be preempted whenever a higher priority task, a fiber, or an interrupt needs to execute. Fibers always interrupt a task when they need to execute. They can only be preempted by interrupts, not by tasks or other fibers, even higher priority ones. As you can see, interrupts are the highest priority and they always interrupt a task or fiber when they need to run. This article covers tasks and fibers but not interrupts. For more information on interrupts, consult the Zephyr Project* Documentation.

Kernels

The core of the Zephyr RTOS is called the kernel, which contains the software system for scheduling code execution. The kernel also contains software subsystems like device drivers and networking software. The Zephyr kernel comes in two forms: the nanokernel and the microkernel.

Nanokernel

The nanokernel is the lighter of the two kernels, with a reduced feature set to achieve a smaller memory footprint. It’s designed for microcontrollers with less than 50kB of system memory, like the Intel® Quark™ microcontroller D2000. The nanokernel is better suited to handle simpler applications, like reading a small number of sensors and communicating over a single radio.

The nanokernel is only allowed to have a single task, usually the main function. Nanokernel applications are not restricted in the number of fibers they can use, up to the limits of their memory. For most applications with the Intel® Quark™ microcontroller D2000, the nanokernel is the best kernel option. While it's possible to compile a microkernel application for the Intel® Quark™ microcontroller D2000, you'll have less room for your application code. Only use the microkernel with the Intel® Quark™ microcontroller D2000 if you absolutely need features that only the microkernel provides (multiple tasks, sophisticated memory management tools, etc.).

Microkernel

Everything the nanokernel can do, the microkernel can do and more. The microkernel is the full-featured version of the Zephyr RTOS. Geared toward complex applications, the microkernel can coordinate multiple tasks, like handling reading sensors and performing data analysis while communicating with the cloud over multiple radio channels. The microkernel can run more than one task as well as an unlimited number of fibers. The microkernel has more available features for managing data flow and memory and synchronizing execution. 

Scheduling with Zephyr

The Tick Timer

Everything in the Zephyr RTOS marches to the timing of the Zephyr tick timer. The tick timer is derived from a 64-bit system clock in the Intel® Quark™ microcontroller D2000 which takes its count from a 32-bit hardware timer. Zephyr’s tick timer defines the granularity of timing in your application. The default step size of the tick timer is 10 milliseconds. The period of the tick timer determines the minimum resolution you can achieve with software timers in your application. It also determines the shortest interval in which the Zephyr RTOS will change between equally prioritized tasks and fibers. A longer tick timer period can potentially make your code less responsive. On the other hand, a shorter tick timer period increases the operating system overhead because changing tasks takes time and resources. You can reduce the tick timer period if you need finer timing resolution or increase it if you want to reduce processor activity. If your application doesn’t require it, the tick timer is best left alone.

Tasks

Now, let's take a look at how to configure and use tasks and fibers to build your application. Be aware that the nanokernel and the microkernel differ in how they handle tasks; fibers work the same way in both. First, let's look at tasks in each of the kernels.

Nanokernel Tasks

In the nanokernel, you’re only allowed one task. Zephyr requires at least one task to operate and uses your main() function as that one task. Zephyr refers to the main() task in a nanokernel application as the background task. As the name suggests, your main() function will only execute when no fiber or Interrupt Service Routine (ISR) needs to run. This fact has some implications for how you write your code and how you structure your application.

Unlike the main() function in a non-RTOS application, the Zephyr background task doesn't end when execution reaches the end of the function; the task runs again in a loop. To avoid running your initialization code again, your main() function should end with an endless loop.

The next consideration is to place code in the main task that is not time critical. Since any other application code can interrupt it at any time, your main task may not always execute with consistent timing. It’s also possible that you could construct an application where the main task doesn’t ever get to execute at all. If fibers and interrupts monopolize the processor by taking too long to execute or by containing prolonged delays, you can cause what’s referred to as task starvation. Chances are that simple applications won’t encounter task starvation. It’s just something that you need to be aware of now that you’re working with an operating system.

Your main() function/task with Zephyr’s nanokernel should look like this:

void main(void){

/*Hardware initialization here*/

while(1){
		/*Endless loop*/
	}
}

If you want the main task to pause, you can use timers and wait for them to expire, or you can use the task_sleep() function to idle the task for a specified length of time. With task_sleep(), you tell the task to sleep for a certain number of ticks; the task then sleeps for that number of ticks multiplied by the tick timer period. For example, to put the main() task to sleep for 10 timer ticks, or 100 milliseconds with a 10 millisecond tick timer, use this:

task_sleep(10);

Using this basic timing functionality, you can create software events which occur at regular intervals.

Microkernel Tasks

With the microkernel, as with the nanokernel, your main() function is a task. Unlike the nanokernel, however, the microkernel can handle multiple tasks. When you design your software architecture, tasks should contain the longer, more complex functions that, like the main task, are too lengthy to be performed by a fiber.

Microkernel tasks are very different from nanokernel tasks in how they are initialized and how they work. Tasks in the microkernel are given priorities to determine which is most important. Priorities for microkernel tasks range from 0, the highest, down to a configurable minimum that defaults to 15. The lowest priority is reserved for the microkernel's idle task, which runs when nothing else needs to execute, so your own tasks should use priorities at least one level above it. The Zephyr RTOS always runs whichever ready task has the highest priority; if two tasks have the same priority, it runs the one that has been waiting the longest.

Like the main() task in the nanokernel, tasks normally run forever in a loop. It’s your responsibility to set the priorities, determining which code needs to always execute on time and which code can handle more interruptions and longer delays.

Microkernel tasks require: a defined memory region to store the task’s stack, a function to be invoked when the task starts executing, a priority, and what’s called a task group. Also, the microkernel requires an extra file, called an MDEF file, in your project directory. The MDEF file is a text file in which you’ll declare all your microkernel objects, including tasks.

Declaring a microkernel task

Microkernel tasks are declared in the MDEF file with all the necessary information conveyed in this order: name, priority, function, stack memory size, and task group. In the MDEF file, comment lines, starting with the % symbol, are not interpreted by the Zephyr build system. Task declarations start with the keyword TASK. Tasks that should execute immediately when the application starts should be assigned to the EXE group. Using task groups, you can start and stop a group of tasks together. Tasks that should not execute immediately, like ones which handle sensors that may require a start up delay, can be assigned to a different task group, or the group can be left empty as in the example below.

% TASK NAME        PRIO  ENTRY   STACK  GROUPS
% =============================================
  TASK MAIN_TASK     6    main    1024   [EXE]
  TASK SENSOR_TASK   2    sensor   400   []

In this example, two tasks are defined. The “MAIN_TASK” is defined with a priority of 6, the main function as its entry point, a stack memory region of 1024 bytes, and assigned to the EXE group to start immediately. The “SENSOR_TASK” is defined with a higher priority of 2, a function called sensor() as its entry point, a stack region of 400 bytes, and no assigned task group.

Starting a Microkernel Task

Tasks in the EXE group will start automatically but tasks that don’t start right away need to be started by another task using the task_start() function. To start a task, you only need to know the task’s name as it appears in the MDEF file:

task_start(SENSOR_TASK);

Fibers

Unlike tasks, fibers are handled the same way in the nanokernel and the microkernel. Fibers are intended for shorter, performance-critical pieces of code. A fiber cannot be interrupted by a task or by another fiber, so its execution timing is more consistent and reliable. Use fibers for device driver code or for communications requiring precise timing. Interrupt service routines can still interrupt a fiber, so account for this when writing your code.

Fibers are scheduled for execution by the RTOS based on priority. Fiber priorities range from 0, the highest priority, down to 2^32 - 1. If two fibers have the same priority, the fiber that has been idle the longest executes first. If no fibers need to execute, the highest priority task that needs to run is scheduled for execution. Of course, a fiber interrupts any task whenever it needs to run.

Initializing a Fiber

Fibers are declared in your source code and then initialized and started from a task or another fiber. The process is slightly more complex than for a task. Fibers require that you declare a stack memory region for storing fiber variables and the context data that is used when the fiber is idled and restarted. You also need to create a function to serve as the entry point where the fiber starts execution. The function can take up to two arguments, although you aren't required to use them; they can supply initialization information to the fiber. You also need to specify the fiber's priority, and you have the option of passing some options to the fiber (the options don't apply to the Intel® Quark™ microcontroller D2000). To declare the stack memory region, do something like this:

#define STACKSIZE 2000
char __stack fiberStack[STACKSIZE];

This declaration uses Zephyr's __stack identifier to declare a properly aligned memory array whose size is set by the STACKSIZE constant. The function that will serve as the fiber entry point requires no special declaration, using a prototype like this:

void fiberEntry(int arg1, int arg2);

If your application doesn’t have a need for the function arguments, you are free to use the function prototype:

void fiberEntry(void);

For your application, you should change “fiberEntry” to something appropriate and meaningful.

With a stack memory region and an entry point function, you have everything you’ll need to use the task_fiber_start function to enable the fiber. The fiber start function takes as arguments: a pointer to the stack memory, an integer defining the size of the stack, the name of the entry function, the two integer arguments, the priority as an integer, and any fiber options. In practice, it looks like this:

task_fiber_start(fiberStack, STACKSIZE, fiberEntry, arg1, arg2, priority, options);

In this case, arg1, arg2, priority, and options are integer variables. Since fibers take priority over tasks, if a task executes this function, the fiber will begin executing as soon as the processor finishes executing the task_fiber_start function: the task is idled and the RTOS switches over to the fiber. If starting the fiber immediately is undesirable, you can use the task_fiber_delayed_start() function instead. This function takes one extra argument, the number of ticks by which the fiber's startup should be delayed; the other arguments are the same and in the same order. To delay the startup of a fiber by 10 ticks, you would change the above function call to this one:

task_fiber_delayed_start(fiberStack, STACKSIZE, fiberEntry, arg1, arg2, priority, options, 10);

If you are starting a fiber from another fiber, then you need to use a different function designed to be used from fibers. The form is the same; just the name is different. Function calls from fibers substitute the word “fiber” for “task” in the function name. Use this function to start a fiber immediately:

fiber_fiber_start(fiberStack, STACKSIZE, fiberEntry, arg1, arg2, priority, options);

And this function to start a fiber with a delay:

fiber_fiber_delayed_start(fiberStack, STACKSIZE, fiberEntry, arg1, arg2, priority, options, 10);

Idling a Fiber

Like a task, a fiber normally executes forever once it's started. Unlike a task, however, a fiber cannot be preempted except by an interrupt: other fibers, including higher priority fibers, cannot interrupt an actively processing fiber. A fiber that monopolizes the processor with long processing times can therefore delay other fibers, including higher priority ones. In that case, it may be necessary for a fiber to deliberately pause execution to allow time for other fibers and tasks to execute. There are two Zephyr functions for this purpose, each with slightly different behavior. The fiber_yield() function idles the calling fiber so that fibers of higher or equal priority have an opportunity to execute; it takes no arguments:

fiber_yield();

For a more general relinquishment of processor control, you should use fiber_sleep(), which surrenders control of the processor without condition for a specified number of ticks. Unlike the yield function, the sleep function also allows tasks and lower priority fibers to execute. To idle a fiber for 10 ticks, use this:

fiber_sleep(10);
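Putting these pieces together, a minimal sketch of a fiber performing periodic work might look like the following. This is illustrative only: the entry-point name, priority, and tick counts are arbitrary choices, and it assumes the legacy Zephyr fiber API described above, so it will only build inside a Zephyr project.

```
/* Sketch only: assumes the legacy Zephyr fiber API described in this
 * article; it will not build outside a Zephyr project. */
#include <zephyr.h>

#define STACKSIZE 2000
char __stack fiberStack[STACKSIZE];

/* Fiber entry point; arg1 and arg2 carry optional initialization data. */
void fiberEntry(int arg1, int arg2)
{
    while (1) {
        /* ... time-sensitive work here ... */
        fiber_sleep(10);  /* surrender the CPU for 10 ticks so tasks and
                             lower priority fibers get a chance to run */
    }
}

/* Called from task context: the fiber preempts the task immediately. */
void startMyFiber(void)
{
    task_fiber_start(fiberStack, STACKSIZE, fiberEntry, 0, 0, 1, 0);
}
```

Because the fiber sleeps rather than yields, tasks also get processor time between iterations of the work loop.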

The Microkernel Server Fiber

Fibers play a larger role in nanokernel applications, since the nanokernel doesn't allow multiple tasks. In microkernel applications, fiber usage should be reserved for the highest priority activities, where being preempted could compromise performance. Nanokernel applications should use fibers for driver interactions and time-sensitive processing.

The microkernel automatically runs one fiber, the microkernel server fiber, which handles the scheduling of all microkernel fibers, determining which fiber needs to execute first. The microkernel server fiber defaults to the highest priority, 0. You can change the microkernel server fiber's priority to a lower one if you have high priority critical code that can't tolerate any delay, such as time-sensitive device drivers. In general, it won't be necessary to change the microkernel server priority, but if you're curious, consult the Zephyr Microkernel Fiber Documentation.

Next Steps

With a basic knowledge of scheduling with the Zephyr RTOS and an understanding of fibers and tasks, you can now build quality applications with Zephyr. For further information, check out the advanced RTOS mechanisms in the Zephyr Project Documentation.

Resources

Intel® XDK FAQs - General


How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions following that, please post them to our forums.

Do I need to use the Intel XDK to complete the HTML5 from W3C Xseries Course?

It is not required that you use the Intel XDK to complete the HTML5 from W3C Xseries course. There is nothing in the course that requires the Intel XDK. 

All that is needed to complete the course is the free Brackets HTML5 editor. Whenever the course refers to using the "Live Layout" feature of the Intel XDK, use the "Live Preview" feature in Brackets instead. The Intel XDK "Live Layout" feature is directly derived from, and is nearly identical to, the Brackets "Live Preview" feature.

For additional help, see this Intel XDK forum post and this Intel XDK forum thread.

Error contacting remote build servers.

If you consistently see this error message while using the Build tab, and you are logged into the Intel XDK, it is likely due to using an obsolete and unsupported version of the Intel XDK. Check your Intel XDK version number (four digit number in the upper-right corner of the Intel XDK) and review the Intel XDK Release Notes for information regarding which versions are currently supported.

You must upgrade to a new version of the Intel XDK to resolve this issue.

NOTICE: Internet connection and login are required.

If you have successfully logged into the Intel XDK, but you are seeing the error message in the image below, when using the Build or Test tab, it may be due to an obsolete and unsupported version of the Intel XDK. Please check your Intel XDK version number (four digit number in the upper-right corner of the Intel XDK) and review the Intel XDK Release Notes for information regarding which versions are currently supported.

Otherwise, please review this FAQ for help creating an Intel XDK login.

I cannot login to the Intel XDK, how do I create a userid and password to use the Intel XDK?

If you have downloaded and installed the Intel XDK but are having trouble creating a userid and password, you can create your login credentials outside of the Intel XDK. To do this, go to the Intel Developer Zone and push the "Join Today" button. After you have created your Intel Developer Zone login you can return to the Intel XDK and use that userid and password to login to the Intel XDK. This same userid and password can also be used to login to the Intel XDK forum.

I cannot login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel login page, to something short and simple. (If you do not have an Intel XDK userid, go to the Intel XDK registration page to create one.)

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK use the same technique to authenticate your login). Once the above works, you can reset your password to something else if you do not like the short and simple password you used for the test.

If you are having trouble logging into any pages on the Intel web site (including the Intel XDK forum), please see the Intel Sign In FAQ for suggestions and contact info. That login system is the backend for the Intel XDK login screen.

How can I change the email address associated with my Intel XDK login?

Login to the Intel Developer Zone with your Intel XDK account userid and password and then locate your "account dashboard." Click the "pencil icon" next to your name to open the "Personal Profile" section of your account, where you can edit your "Name & Contact Info," including the email address associated with your account, under the "Private" section of your profile.

Inactive account/login issue/problem updating an APK in store, How do I request account transfer?

As of June 26, 2015 we migrated all Intel XDK accounts to the more secure intel.com login system (the same login system you use to access this forum).

We have migrated nearly all active users to the new login system. Unfortunately, there are a few active user accounts that we could not automatically migrate to intel.com, primarily because the intel.com login system does not allow the use of some characters in userids that were allowed in the old login system.

If you have not used the Intel XDK for a long time prior to June 2015, your account may not have been automatically migrated. If you own an "inactive" account it will have to be manually migrated -- please try logging into the Intel XDK with your old userid and password, to determine if it no longer works. If you find that you cannot login to your existing Intel XDK account, and still need access to your old account, please send a message to html5tools@intel.com and include your userid and the email address associated with that userid, so we can guide you through the steps required to reactivate your old account.

Alternatively, you can create a new Intel XDK account. If you have submitted an app to the Android store from your old account you will need access to that old account to retrieve the Android signing certificates in order to upgrade that app on the Android store; in that case, send an email to html5tools@intel.com with your old account username and email and new account information.

I lost my project, how do I download my project source code from the Intel XDK servers?

We do not store your projects on our servers for any significant period of time, just long enough to perform a build or send for testing on App Preview. Your source code is located inside of the APK and IPA files you built. You will have to recreate the project settings, but you have all of the source if you have the APK (or IPA or Windows build). Rename the APK you have to a ZIP, for example from "my-app.apk" to "my-app.apk.zip" and then unzip that file using your favorite archive tool. For example, the contents of an APK based on the "hello-cordova" sample:

NOTE: the cordova-js-src folder was added by Cordova, it is not part of the original source for this sample project. Likewise, the cordova.js and the cordova_plugins.js files were added by Cordova. The remaining files and folders within the www folder were directly copied from the original project's www folder.

You can start a new project using the blank template and copy the source code from inside the APK's www folder into that project's www folder. You can also see which plugins were included in the APK by inspecting the plugins folder or inspecting the cordova_plugins.js file that was added to the APK. At the very end of the cordova_plugins.js file is a list of plugins that were added and the specific versions of those plugins. For example, from the APK above, that is based on the "hello-cordova" sample, the last lines from the cordova_plugins.js file:

module.exports.metadata =
// TOP OF METADATA
{"cordova-plugin-crosswalk-webview": "1.5.0","cordova-plugin-device-orientation": "1.0.3","cordova-plugin-device": "1.1.2","cordova-plugin-compat": "1.1.0","cordova-plugin-geolocation": "2.2.0","cordova-plugin-inappbrowser": "1.4.0","cordova-plugin-splashscreen": "3.2.2","cordova-plugin-dialogs": "1.2.1","cordova-plugin-statusbar": "2.1.3","cordova-plugin-file": "4.2.0","cordova-plugin-media": "2.3.0","cordova-plugin-device-motion": "1.2.1","cordova-plugin-vibration": "2.1.1","cordova-plugin-whitelist": "1.2.2"
};

NOTE: in the list above, the cordova-plugin-whitelist and cordova-plugin-crosswalk-webview plugins were added automatically by the Intel XDK and, likewise, will be added automatically by the Intel XDK Cordova export tool, so you do not need to add these two plugins to your rebuilt project.

If you were using Crosswalk, you may see a xwalk-command-line file in the APK, the contents of that file are the Crosswalk initialization commands that were provided, for example, from this same sample app:

xwalk --ignore-gpu-blacklist --disable-pull-to-refresh-effect 

Beyond that, you can inspect the AndroidManifest.xml file to find a few other things, like the version numbers. For example, if you have Android Studio installed on your system, you can use the aapt command to inspect the contents of your APK. The most useful being the version codes and the package name, as shown below:

$ aapt list -a my-app.apk | fgrep -i version
    ...lines deleted for clarity...
    A: android:versionCode(0x0101021b)=(type 0x10)0x1c
    A: android:versionName(0x0101021c)="16.5.16" (Raw: "16.5.16")
    A: platformBuildVersionCode=(type 0x10)0x17 (Raw: "23")
    A: platformBuildVersionName="6.0-2704002" (Raw: "6.0-2704002")
      A: android:minSdkVersion(0x0101020c)=(type 0x10)0xe
      A: android:targetSdkVersion(0x01010270)=(type 0x10)0x15

$ aapt list -a my-app.apk | fgrep package
Package Group 0 id=0x7f packageCount=1 name=xdk.intel.hellocordova
    A: package="xdk.intel.hellocordova" (Raw: "xdk.intel.hellocordova")

How do I convert my web app or web site into a mobile app?

The Intel XDK creates Cordova mobile apps (aka PhoneGap apps). Cordova web apps are driven by HTML5 code (HTML, CSS, and JavaScript). There is no web server on the mobile device to "serve" the HTML pages in your Cordova web app; the main program resources required by your Cordova web app are file-based, meaning all of your web app resources are located within the mobile app package and reside on the mobile device. Your app may also require resources from a server. In that case, you will need to connect with that server using AJAX or similar techniques, usually via a collection of RESTful APIs provided by that server. However, your app is not integrated into that server; the two entities are independent and separate.

Many web developers believe they should be able to include PHP or Java code or other "server-based" code as an integral part of their Cordova app, just as they do in a "dynamic web app." This technique does not work in a Cordova web app, because your app does not reside on a server; there is no "backend." Your Cordova web app is a "front-end" HTML5 web app that runs independently of any server. See the following articles for more information on how to move from writing "multi-page dynamic web apps" to "single-page Cordova web apps":

Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor

Where are the global-settings.xdk and xdk.log files?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings related to panels under each tab (Emulate, Debug etc). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk and always keep a backup of the original!

The xdk.log file contains logged data generated by the Intel XDK while it is running. Sometimes technical support will ask for a copy of this file in order to get additional information to engineering regarding problems you may be having with the Intel XDK. 

Both files are located in the same directory on your development system. Unfortunately, the precise location of these files varies with the specific version of the Intel XDK. You can find the global-settings.xdk and the xdk.log using the following command-line searches:

  • From a Windows cmd.exe session:
    > cd /
    > dir /s global-settings.xdk
     
  • From a Mac and Linux bash or terminal session:
    $ sudo find / -name global-settings.xdk

When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

How do I get my Android (and Crosswalk) keystore file?

New with release 3088 of the Intel XDK, you may now download your build certificates (aka keystore) using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Convert a Legacy Android Certificate" in that document, for details regarding how to do this.

It may also help to review this short, quick overview video (there is no audio) that shows how you convert your existing "legacy" certificates to the "new" format that allows you to directly manage your certificates using the certificate management tool that is built into the Intel XDK. This conversion process is done only once.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I rename my project that is a duplicate of an existing project?

See this FAQ: How do I make a copy of an existing Intel XDK project?

How do I recover when the Intel XDK hangs or won't start?

  • If you are running Intel XDK on Windows* you must use Windows* 7 or higher. The Intel XDK will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.

    On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

    > cd %AppData%\..\Local\XDK
    > del *.* /s/q

    To locate the "XDK cache" directory on OS X* and Linux* systems, do the following:

    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *

    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you save the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above, along with the "global-settings.xdk" file and try it again.
  • Do not store your project directories on a network share (the Intel XDK has issues with network shares that have not been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). 
  • Some people have issues using the Intel XDK behind a corporate network proxy or firewall. To check for this issue, try running the Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel login page and confirm that you can login with your Intel XDK account username and password.
  • If you are experiencing login issues, please send an email to html5tools@intel.com from the email address registered to your login account, describing the nature of your account problem and any other details you believe may be relevant.

If you can reliably reproduce the problem, please post a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to the Intel XDK forum. Please ATTACH the xdk.log file to your post using the "Attach Files to Post" link below the forum edit window.

Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are assembled into the Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up the Intel XDK.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*

How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

Intel XDK does support the use of 9-patch png images for an Android* app's splash screen. You can read more at https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png about how to create a 9-patch png image; that article also links to an Intel XDK sample using 9-patch png images.

How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple’s application services allowing you to use things like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID:

Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-intents-plugin"
    version="1.0.0">
    <name>My Custom Intents Plugin</name>
    <description>Add Intents to the AndroidManifest.xml</description>
    <license>MIT</license>
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- android -->
    <platform name="android">
        <config-file target="AndroidManifest.xml" parent="/manifest/application">
            <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale"
                android:label="@string/app_name"
                android:launchMode="singleTop"
                android:name="testa"
                android:theme="@android:style/Theme.Black.NoTitleBar">
                <intent-filter>
                    <action android:name="android.intent.action.SEND" />
                    <category android:name="android.intent.category.DEFAULT" />
                    <data android:mimeType="*/*" />
                </intent-filter>
            </activity>
        </config-file>
    </platform>
</plugin>

You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

$ apktool d my-app.apk
$ cd my-app
$ more AndroidManifest.xml

This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-bis-plugin"
    version="0.0.2">
    <name>My Custom BIS Plugin</name>
    <description>Add BIS info to iOS plist file.</description>
    <license>BSD-3</license>
    <preference name="BIS_KEY" />
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- ios -->
    <platform name="ios">
        <config-file target="*-Info.plist" parent="CFBundleURLTypes">
            <array>
                <dict>
                    <key>ITSAppUsesNonExemptEncryption</key><true/>
                    <key>ITSEncryptionExportComplianceCode</key><string>$BIS_KEY</string>
                </dict>
            </array>
        </config-file>
    </platform>
</plugin>

Also see this forum thread (https://software.intel.com/en-us/forums/intel-xdk/topic/680309) for an example of how to customize the OneSignal plugin's notification sound in an Android app by way of a simple custom Cordova plugin. The same technique can be applied to adding custom icons and other assets to your project.

How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • The App ID specified in the project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In Project Build Settings, your App Name is invalid. It should contain only alphabetic characters, spaces, and numbers.

How do I add multiple domains in Domain Access?

Here is the primary doc source for that feature.

If you need to insert multiple domain references, then you will need to add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea and you can see the intelxdk.config.*.xml files that are automatically generated with each build for the <access origin="xxx" /> line that is generated based on what you provide in the "Domain Access" field of the "Build Settings" panel on the Project Tab.
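As a sketch, the extra references added to intelxdk.config.additions.xml might look like the following (the domain names here are placeholders; substitute the domains your app actually contacts):

```xml
<!-- Placeholder domains; add one <access> line per additional domain. -->
<access origin="https://api.example.com" />
<access origin="https://cdn.example.com" subdomains="true" />
```

These lines supplement the single <access origin="xxx" /> entry generated from the "Domain Access" field.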

How do I build more than one app using the same Apple developer account?

On Apple developer, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from Intel XDK Build tab only for the first app. For subsequent apps, reuse the same certificate and import this certificate into the Build tab like you usually would.

How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top-level directory (the same location as the other intelxdk.*.config.xml files) and add the following lines to support icons in Settings and other areas in iOS*.

<!-- Spotlight Icon -->
<icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
<icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
<icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />
<!-- iPhone Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
<icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
<icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />
<!-- iPad Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
<icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

How do I sign an Android* app using an existing keystore?

New with release 3088 of the Intel XDK, you may now import your existing keystore into Intel XDK using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Import an Android Certificate Keystore" in that document, for details regarding how to do this.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving to be troublesome, you can change your display language to English. The English language pack can be installed via a Windows* update; once it is installed, proceed to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point it to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" HTML file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included as part of your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executables returned by the build system. See the following images for the recommended project file layout.

How do I completely uninstall the Intel XDK from my system?

Take the following steps to completely uninstall the XDK from your Windows system:

The steps below assume you installed into the "default" location. Version 3900 (and later) installs the user data files one level deeper, but using the locations specified will still find the saved user information and node-webkit cache files. If you did not install in the "default" location you will have to find the location you did install into and remove the files mentioned here from that location.

  • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

  • Then:
    > cd %LocalAppData%\Intel\XDK
    > rmdir /s /q .

  • Then:
    > cd %LocalAppData%\XDK
    > copy global-settings.xdk %UserProfile%
    > rmdir /s /q .
    > copy %UserProfile%\global-settings.xdk .

  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

If the Intel XDK is still listed as an app in the Windows Control Panel "Uninstall or change a program" list, find this entry in your registry (using regedit):

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall

Delete any sub-entries that refer to the Intel XDK. For example, a 3900 install will have this sub-key:

ARP_for_prd_xdk_0.0.3900

Use the following methods on a Linux or a Mac system:

  • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
     
  • Remove the directory into which the Intel XDK was installed.
    -- Typically /opt/intel or your home (~) directory on a Linux machine.
    -- Typically in the /Applications/Intel XDK.app directory on a Mac.
     
  • Then:
    $ find ~ -name global-settings.xdk
    $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
    $ cp global-settings.xdk ~
    $ rm -Rf *
    $ mv ~/global-settings.xdk .

     
  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

Is there a tool that can help me highlight syntax issues in Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

How do I delete built apps and test apps from the Intel XDK build servers?

You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within Intel XDK after which access to app center will be removed.

I need help with the App Security API plugin; where do I find it?

Visit the primary documentation book for the App Security API and see this forum post for some additional details.

When I install my app or use the Debug tab Avast antivirus flags a possible virus, why?

If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message from the Avast anti-virus installed on your Android device, it is because you are side-loading the app (or the Intel XDK debug modules) onto your device (using a download link after building, or by using the Debug tab to debug your app), or because your app was installed from an "untrusted" Android store. See the following official explanation from Avast:

Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

  1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
  2. The source is not an established market (Google Play is an example of an established market).

If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

Following are some of the Avast anti-virus notification screens you might see on your device. All of these are perfectly normal. They appear because you must enable the installation of "non-market" apps in order to use your device for debugging, and because the App IDs associated with your never-published app (or with the custom debug modules that the Debug tab in the Intel XDK builds and installs on your device) will not be found in an "established" (aka "trusted") market, such as Google Play.

If you choose to ignore the "Suspicious app activity!" threat you will not receive a threat for that debug module any longer. It will show up in the Avast 'ignored issues' list. Updates to an existing, ignored, custom debug module should continue to be ignored by Avast. However, new custom debug modules (due to a new project App ID or a new version of Crosswalk selected in your project's Build Settings) will result in a new warning from the Avast anti-virus tool.


How do I add a Brackets extension to the editor that is part of the Intel XDK?

The set of Brackets extensions provided in the built-in edition of the Brackets editor is limited, to ensure the stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK. Adding incompatible extensions can cause the Intel XDK to quit working.

Despite this warning, there are useful extensions that have not been included in the editor and which can be added to the Intel XDK. Adding them is temporary: each time you update the Intel XDK (or reinstall it) you will have to re-add your Brackets extensions. To add a Brackets extension, use the following procedure:

  • exit the Intel XDK
  • download a ZIP file of the extension you wish to add
  • on Windows, unzip the extension here:
    %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
  • on Mac OS X, unzip the extension here:
    /Applications/Intel\ XDK/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  • start the Intel XDK

Note that the locations given above are subject to change with new releases of the Intel XDK.

Why does my app or game require so many permissions on Android when built with the Intel XDK?

When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

A pure Cordova app requires the NETWORK permission; it is needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

  • android.permission.INTERNET
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.WRITE_EXTERNAL_STORAGE

then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).

How do I make a copy of an existing Intel XDK project?

If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with the one stored in your new project. If you do not follow this procedure you will have multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

  • Exit the Intel XDK.
  • Make a copy of your existing project using the process described above.
  • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • Save the modified "project-new.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
  • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.
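The GUID edit in the steps above can also be scripted. Below is a hypothetical sketch (run with Node.js); the helper name and regular expression are illustrative assumptions, not part of the Intel XDK:

```javascript
// Hypothetical helper (not part of the Intel XDK): zero out the projectGuid
// value in the text of a copied <project-name>.xdk file.
function zeroProjectGuid(xdkFileText) {
  return xdkFileText.replace(
    /"projectGuid":\s*"[0-9a-fA-F-]+"/,
    '"projectGuid": "00000000-0000-0000-0000-000000000000"'
  );
}
```

Read the copied .xdk file as text, pass it through this function, and save the result, so the copy no longer shares a project ID with the original.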

My project does not include a www folder. How do I fix it so it includes a www or source directory?

The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention of putting your source inside a "source directory" inside of your project folder.

This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
  • Create a "www" directory inside the new duplicate project you just created above.
  • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files and any intelxdk.config.*.xml files, those must stay in the root of the project directory)
  • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different than the original project, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • A few lines down find: "sourceDirectory": "",
  • Change it to this: "sourceDirectory": "www",
  • Save the modified "project-copy.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.

Can I install more than one copy of the Intel XDK onto my development system?

Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with via, for example, AJAX calls.
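For example, a page that asked a server-side PHP script to embed a user's name must instead fetch that data at run time. A minimal sketch follows; the endpoint URL, function names and JSON shape are invented for illustration:

```javascript
// Hypothetical sketch: a server-side lookup rewritten as a RESTful AJAX call.
// The endpoint URL and response shape are assumptions for illustration only.
function userUrl(baseUrl, userId) {
  return baseUrl + '/users/' + encodeURIComponent(userId);
}

function greeting(user) {
  // In a real app this string would be written into the DOM.
  return 'Hello, ' + user.name;
}

function loadUser(userId) {
  return fetch(userUrl('https://api.example.com', userId))
    .then(function (response) { return response.json(); })
    .then(greeting);
}
```

The same pattern applies to form submissions, database lookups, and any other logic that used to run on the page's own server.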

What is the best training approach to using the Intel XDK for a newbie?

First, become well-versed in the art of client web apps, apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

There is no single most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app varies as a function of the platform. Just as there are differences between Chrome, Firefox, Safari and Internet Explorer, there are differences between iOS 9 and iOS 8, and between Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option: to normalize the HTML5 runtime across Android devices and versions.

If you get your app working well on Android (or Crosswalk for Android) first, you will generally have fewer issues to deal with when you move on to the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options, so it is the easiest platform to use for debugging and testing your app.

Is my password encrypted and why is it limited to fifteen characters?

Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK itself does not store or manage your userid and password.

The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

Why does the Intel XDK take a long time to start on Linux or Mac?

...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings. As an example, you might see something like this image when starting the Intel XDK.

On some systems you can get around this problem by setting some proxy environment variables and then starting the Intel XDK from a command line that includes those configured environment variables. To set those environment variables, use commands similar to the following:

$ export no_proxy="localhost,127.0.0.1/8,::1"
$ export NO_PROXY="localhost,127.0.0.1/8,::1"
$ export http_proxy=http://proxy.mydomain.com:123/
$ export HTTP_PROXY=http://proxy.mydomain.com:123/
$ export https_proxy=http://proxy.mydomain.com:123/
$ export HTTPS_PROXY=http://proxy.mydomain.com:123/

IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.

After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

$ open /Applications/Intel\ XDK.app/

On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

$ ~/intel/XDK/xdk.sh &

In the Linux case, you will need to adjust the directory name that points to the xdk.sh file in order to start. The example above assumes a local install into the ~/intel/XDK directory. Since Linux installations have more options regarding the installation directory, you will need to adjust the above to suit your particular system and install directory.

How do I generate a P12 file on a Windows machine?

See these articles:

How do I change the default dir for creating new projects in the Intel XDK?

You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

"projects-tab": {"defaultPath": "/Users/paul/Documents/XDK","LastSortType": "descending|Name","lastSortType": "descending|Opened","thirdPartyDisclaimerAcked": true
  },

The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

"projects-tab": {"thirdPartyDisclaimerAcked": false,"LastSortType": "descending|Name","lastSortType": "descending|Opened","defaultPath": "C:\\Users\\paul/Documents"
  },

Obviously, it's the defaultPath part you want to change.

BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

Make sure the result is proper JSON when you are done, or it may cause your XDK to cough and hack loudly. Make a backup copy of global-settings.xdk before you start, just in case.
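If you want a quick way to confirm your edit left the file as valid JSON, a sketch like the following (run with Node.js; not part of the Intel XDK) will do:

```javascript
// Sketch: confirm an edited global-settings.xdk is still valid JSON
// before restarting the Intel XDK.
function isValidJson(text) {
  try {
    JSON.parse(text);
    return true;
  } catch (e) {
    return false;
  }
}

// A stray trailing comma is enough to break the file:
// isValidJson('{"defaultPath": "C:/Users/paul/Documents",}') is false
```

Paste the file's contents into this function (or read the file with Node's fs module) before restarting the Intel XDK.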

Where can I find a list of recent and upcoming webinars?

What network addresses must I enable in my firewall to ensure the Intel XDK will work on my restricted network?

Normally, access to the external servers that the Intel XDK uses is handled automatically by your proxy server. However, if you are working in an environment that has restricted Internet access and you need to provide your IT department with a list of URLs that you need access to in order to use the Intel XDK, then please provide them with the following list of domain names:

  • appcenter.html5tools-software.intel.com (for communication with the build servers)
  • s3.amazonaws.com (for downloading sample apps and built apps)
  • download.xdk.intel.com (for getting XDK updates)
  • xdk-feed-proxy.html5tools-software.intel.com (for receiving the tweets in the upper right corner of the XDK)
  • signin.intel.com (for logging into the XDK)
  • sfederation.intel.com (for logging into the XDK)

Normally this should be handled by your network proxy (if you're on a corporate network) or should not be an issue if you are working on a typical home network.

Installing the Intel XDK on Windows fails with a "Package signature verification failed." message.

If you receive a "Package signature verification failed" message (see image below) when installing the Intel XDK on your system, it is likely due to one of the following two reasons:

  • Your system does not have a properly installed "root certificate" file, which is needed to confirm that the install package is good.
  • The install package is corrupt and failed the verification step.

The first case can happen if you are attempting to install the Intel XDK on an unsupported version of Windows. The Intel XDK is only supported on Microsoft Windows 7 and higher. If you attempt to install on Windows Vista (or earlier) you may see this verification error. The workaround is to install the Intel XDK on a Windows 7 or greater machine.

The second case is likely due to a corruption of the install package during download or due to tampering. The workaround is to re-download the install package and attempt another install.

If you are installing on a Windows 7 (or greater) machine and you see this message it is likely due to a missing or bad root certificate on your system. To fix this you may need to start the "Certificate Propagation" service. Open the Windows "services.msc" panel and then start the "Certificate Propagation" service. Additional links related to this problem can be found here > https://technet.microsoft.com/en-us/library/cc754841.aspx

See this forum thread for additional help regarding this issue > https://software.intel.com/en-us/forums/intel-xdk/topic/603992

Troubles installing the Intel XDK on a Linux or Ubuntu system, which option should I choose?

Choose the local user option, not root or sudo, when installing the Intel XDK on your Linux or Ubuntu system. This is the most reliable and trouble-free option and is the default installation option. It ensures that the Intel XDK has all the permissions necessary to execute properly on your Linux system. The Intel XDK will be installed in a subdirectory of your home (~) directory.

Connection Problems? -- Intel XDK SSL certificates update

On January 26, 2016 we updated the SSL certificates on our back-end systems to SHA2 certificates. The existing certificates were due to expire in February of 2016. We have also disabled support for obsolete protocols.

If you are experiencing persistent connection issues (since Jan 26, 2016), please post a problem report on the forum and include in your problem report:

  • the operation that failed
  • the version of your XDK
  • the version of your operating system
  • your geographic region
  • and a screen capture

How do I resolve build failure: "libpng error: Not a PNG file"?  

If you are experiencing build failures with CLI 5 Android builds, and the detailed error log includes a message similar to the following:

Execution failed for task ':mergeArmv7ReleaseResources'.
> Error: Failed to run command: /Developer/android-sdk-linux/build-tools/22.0.1/aapt s -i .../platforms/android/res/drawable-land-hdpi/screen.png -o .../platforms/android/build/intermediates/res/armv7/release/drawable-land-hdpi-v4/screen.png

Error Code: 42

Output: libpng error: Not a PNG file

You need to change the format of your icon and/or splash screen images to PNG format.

The error message refers to a file named "screen.png" -- the name each of your splash screen images is given before it is moved into the build project resource directories. In this case, JPG images were supplied for use as splash screen images, not PNG images, so the renamed files were found by the build system to be invalid.

Convert your splash screen images to PNG format. Renaming JPG images to PNG will not work! You must convert your JPG images into PNG format images using an appropriate image editing tool. The Intel XDK does not provide any such conversion tool.
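A quick way to catch a renamed JPG before uploading a build is to check the file's first eight bytes; a real PNG always begins with the same signature. This sketch (not part of the Intel XDK) assumes you have read the file into an array of byte values, for example with Node's fs module:

```javascript
// Sketch: a real PNG always starts with these eight signature bytes.
// A JPG renamed to .png starts with 0xFF 0xD8 instead and fails this check.
function hasPngSignature(bytes) {
  const sig = [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A];
  return sig.every(function (b, i) { return bytes[i] === b; });
}
```

Run this over each icon and splash screen image; any file that fails the check needs a true format conversion, not a rename.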

Beginning with Cordova CLI 5, all icons and splash screen images must be supplied in PNG format. This applies to all supported platforms. This is an undocumented "new feature" of the Cordova CLI 5 build system that was implemented by the Apache Cordova project.

Why do I get a "Parse Error" when I try to install my built APK on my Android device?

Because you have built an "unsigned" Android APK. You must click the "signed" box in the Android Build Settings section of the Projects tab if you want to install an APK on your device. The only reason you would choose to create an "unsigned" APK is if you need to sign it manually. This is very rare and not the normal situation.

My converted legacy keystore does not work. Google Play is rejecting my updated app.

The keystore you converted when you updated to 3088 (now 3240 or later) is the same keystore you were using in 2893. When you upgraded to 3088 (or later) and "converted" your legacy keystore, you re-signed and renamed your legacy keystore and it was transferred into a database to be used with the Intel XDK certificate management tool. It is still the same keystore, but with an alias name and password assigned by you and accessible directly by you through the Intel XDK.

If you kept the converted legacy keystore in your account following the conversion you can download that keystore from the Intel XDK for safe keeping (do not delete it from your account or from your system). Make sure you keep track of the new password(s) you assigned to the converted keystore.

There are two problems we have experienced with converted legacy keystores at the time of the 3088 release (April, 2016):

  • Foreign (non-ASCII) characters used in the new alias name and passwords were being corrupted.
  • Final signing of your APK by the build system was being done with RSA256 rather than SHA1.

Both of the above items have been resolved and should no longer be an issue.

If you are currently unable to complete a build with your converted legacy keystore (i.e., builds fail when you use the converted legacy keystore but succeed when you use a new keystore), the first bullet above is likely the reason your converted keystore is not working. In that case we can reset your converted keystore and give you the option to convert it again; request that your legacy keystore be "reset" by filling out this form. To be safe during that second conversion, use only 7-bit ASCII characters in the alias name and password(s) you assign.

IMPORTANT: using the legacy certificate to build your Android app is ONLY necessary if you have already published an app to an Android store and need to update that app. If you have never published an app to an Android store using the legacy certificate you do not need to concern yourself with resetting and reconverting your legacy keystore. It is easier, in that case, to create a new Android keystore and use that new keystore.

If you ARE able to successfully build your app with the converted legacy keystore, but your updated app (in the Google store) does not install on some older Android 4.x devices (typically a subset of Android 4.0-4.2 devices), the second bullet cited above is likely the reason for the problem. The solution, in that case, is to rebuild your app and resubmit it to the store (that problem was a build-system problem that has been resolved).

How can I have others beta test my app using Intel App Preview?

Apps that you sync to your Intel XDK account, using the Test tab's green "Push Files" button, can only be accessed by logging into Intel App Preview with the same Intel XDK account credentials that you used to push the files to the cloud. In other words, you can only download and run your app for testing with Intel App Preview if you log into the same account that you used to upload that test app. This restriction applies to downloading your app into Intel App Preview via the "Server Apps" tab, at the bottom of the Intel App Preview screen, or by scanning the QR code displayed on the Intel XDK Test tab using the camera icon in the upper right corner of Intel App Preview.

If you want to allow others to test your app using Intel App Preview, you must use one of two options:

  • give them your Intel XDK userid and password
  • create an Intel XDK "test account" and provide your testers with that userid and password

For security's sake, we highly recommend the second option (create an Intel XDK "test account"). 

A "test account" is simply a second Intel XDK account that you do not plan to use for development or builds. Do not use the same email address for your "test account" as you are using for your main development account. You should use a "throw away" email address for that "test account" (an email address that you do not care about).

Assuming you have created an Intel XDK "test account" and have instructed your testers to download and install Intel App Preview; have provided them with your "test account" userid and password; and you are ready to have them test:

  • sign out of your Intel XDK "development account" (using the little "man" icon in the upper right)
  • sign into your "test account" (again, using the little "man" icon in the Intel XDK toolbar)
  • make sure you have selected the project that you want users to test, on the Projects tab
  • go to the Test tab
  • make sure "MOBILE" is selected (upper left of the Test tab)
  • push the green "PUSH FILES" button on the Test tab
  • log out of your "test account"
  • log into your development account

Then, tell your beta testers to log into Intel App Preview with your "test account" credentials and instruct them to choose the "Server Apps" tab at the bottom of the Intel App Preview screen. From there they should see the name of the app you synced using the Test tab and can start it by touching the app name (followed by the big blue and white "Launch This App" button). Starting the app this way is actually easier than sending them a copy of the QR code; the QR code is very dense and can be hard to read on some devices, depending on the quality of the camera.

Note that when running your test app inside of Intel App Preview your testers cannot exercise any features associated with third-party plugins, only core Cordova plugins. Thus, you need to ensure that those parts of your app that depend on non-core Cordova plugins have been disabled or have exception handlers to prevent your app from crashing or freezing.
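A minimal sketch of such a guard (the plugin path "cordova.plugins.barcodeScanner" is only a hypothetical example; substitute the global object your plugin actually exposes):

```javascript
// Guard plugin-dependent features so the app keeps running inside
// Intel App Preview, where third-party plugin globals do not exist.
function pluginAvailable(root, path) {
    // Walk a dotted path (e.g. "cordova.plugins.barcodeScanner") and
    // report whether every level exists on the given root object.
    return path.split(".").every(function (name) {
        if (root && typeof root === "object" && name in root) {
            root = root[name];
            return true;
        }
        return false;
    });
}

// Example with a stand-in for the real window object:
var fakeWindow = { cordova: { plugins: {} } };
if (pluginAvailable(fakeWindow, "cordova.plugins.barcodeScanner")) {
    // safe to call the plugin here
} else {
    // disable the feature or show a fallback instead of crashing
}
```

In a real app you would pass `window` as the root and wrap the plugin call itself in a try/catch as an extra safety net.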

I'm having trouble making Google Maps work with my Intel XDK app. What can I do?

There are many reasons that can cause your attempt to use Google Maps to fail. Mostly it is due to the fact that you need to download the Google Maps API (JavaScript library) at runtime to make things work. However, there is no guarantee that you will have a good network connection, so if you do it the way you are used to doing it, in a browser...

<script src="https://maps.googleapis.com/maps/api/js?key=API_KEY&sensor=true"></script>

...you may get yourself into trouble, in an Intel XDK Cordova app. See Loading Google Maps in Cordova the Right Way for an excellent tutorial on why this is a problem and how to deal with it. Also, it may help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, especially item #3, to get a better understanding of why you shouldn't use the "browser technique" you're familiar with.
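One hedged way to do it (a sketch only; "YOUR_API_KEY" and the "initMap" callback name are placeholders you must supply) is to inject the Maps script at runtime and attach explicit success and failure handlers:

```javascript
// Build the Maps API URL with an explicit JS callback parameter.
function mapsScriptUrl(apiKey, callbackName) {
    return "https://maps.googleapis.com/maps/api/js?key=" +
        encodeURIComponent(apiKey) + "&callback=" + callbackName;
}

// Inject the script tag at runtime so a missing network connection
// triggers onError instead of silently breaking the page.
function loadGoogleMaps(apiKey, onReady, onError) {
    if (typeof document === "undefined") { return; }  // no webview/browser
    window.initMap = onReady;            // the Maps API invokes this when loaded
    var s = document.createElement("script");
    s.src = mapsScriptUrl(apiKey, "initMap");
    s.onerror = onError;                 // e.g. offline: show a message, retry later
    document.body.appendChild(s);
}
```

Call `loadGoogleMaps("YOUR_API_KEY", startMapUi, showOfflineMessage)` (both handler names are placeholders) after the Cordova `deviceready` event rather than at page load, so the app can decide what to do when the download fails.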

An alternative is to use a mapping tool that allows you to include the JavaScript directly in your app, rather than downloading it over the network each time your app starts. Several Intel XDK developers have reported very good luck with the open-source JavaScript library named LeafletJS, which uses OpenStreetMap as its map data source.

You can also search the Cordova Plugin Database for Cordova plugins that implement mapping features, in some cases using native SDKs and libraries.

How do I fix "Cannot find the Intel XDK. Make sure your device and intel XDK are on the same wireless network." error messages?

You can either disable your firewall or allow access through the firewall for the Intel XDK. To allow access through the Windows firewall, go to the Windows Control Panel and search for the Firewall (Control Panel > System and Security > Windows Firewall > Allowed Apps) and enable Node Webkit (nw or nw.exe) through the firewall.

See the image below (this image is from a Windows 8.1 system).

Google Services needs my SHA1 fingerprint. Where do I get my app's SHA fingerprint?

Your app's SHA fingerprint is part of your build signing certificate. Specifically, it is part of the signing certificate that you used to build your app. The Intel XDK provides a way to download your build certificates directly from within the Intel XDK application (see the Intel XDK documentation for details on how to manage your build certificates). Once you have downloaded your build certificate you can use these instructions provided by Google, to extract the fingerprint, or simply search the Internet for "extract fingerprint from android build certificate" to find many articles detailing this process.

Why am I unable to test or build or connect to the old build server with Intel XDK version 2893?

This is an Important Note Regarding the use of Intel XDK Versions 2893 and Older!!

As of June 13, 2016, versions of the Intel XDK released prior to March 2016 (2893 and older) can no longer use the Build tab, the Test tab or Intel App Preview; and can no longer create custom debug modules for use with the Debug and Profile tabs. This change was necessary to improve the security and performance of our Intel XDK cloud-based build system. If you are using version 2893 or older, of the Intel XDK, you must upgrade to version 3088 or greater to continue to develop, debug and build Intel XDK Cordova apps.

The error message you see below, "NOTICE: Internet Connection and Login Required," when trying to use the Build tab is due to the fact that the cloud-based component used by those older versions of the Intel XDK has been retired and is no longer present. The error message appears to be misleading, but it is the easiest way to identify this condition. 

How do I run the Intel XDK on Fedora Linux?

See the instructions below, copied from this forum post:

$ sudo find xdk/install/dir -name libudev.so.0
$ cd dir/found/above
$ sudo rm libudev.so.0
$ sudo ln -s /lib64/libudev.so.1 libudev.so.0

Note that "xdk/install/dir" is the name of the directory where you installed the Intel XDK. This might be "/opt/intel/xdk" or "~/intel/xdk" or something similar. Since the Linux install is flexible regarding the precise installation location, you may have to search to find it on your system.

Once you find that libudev.so file in the Intel XDK install directory you must "cd" to that directory to finish the operations as written above.

Additional instructions have been provided in the related forum thread; please see that thread for the latest information regarding hints on how to make the Intel XDK run on a Fedora Linux system.

The Intel XDK generates a path error for my launch icons and splash screen files.

If you have an older project (created prior to August of 2016 using a version of the Intel XDK older than 3491) you may be seeing a build error indicating that some icon and/or splash screen image files cannot be found. This is likely due to the fact that some of your icon and/or splash screen image files are located within your source folder (typically named "www") rather than in the new package-assets folder. For example, inspecting one of the auto-generated intelxdk.config.*.xml files you might find something like the following:

<icon platform="windows" src="images/launchIcon_24.png" width="24" height="24"/>
<icon platform="windows" src="images/launchIcon_434x210.png" width="434" height="210"/>
<icon platform="windows" src="images/launchIcon_744x360.png" width="744" height="360"/>
<icon platform="windows" src="package-assets/ic_launch_50.png" width="50" height="50"/>
<icon platform="windows" src="package-assets/ic_launch_150.png" width="150" height="150"/>
<icon platform="windows" src="package-assets/ic_launch_44.png" width="44" height="44"/>

where the first three images are not being found by the build system because they are located in the "www" folder and the last three are being found, because they are located in the "package-assets" folder.

This problem usually comes about because the UI does not include the appropriate "slots" to hold those images. This results in some "dead" icon or splash screen images inside the <project-name>.xdk file which need to be removed. To fix this, make a backup copy of your <project-name>.xdk file and then, using a CODE or TEXT editor (e.g., Notepad++ or Brackets or Sublime Text or vi, etc.), edit your <project-name>.xdk file in the root of your project folder.

Inside of your <project-name>.xdk file you will find entries that look like this:

"icons_": [
  {"relPath": "images/launchIcon_24.png", "width": 24, "height": 24},
  {"relPath": "images/launchIcon_434x210.png", "width": 434, "height": 210},
  {"relPath": "images/launchIcon_744x360.png", "width": 744, "height": 360},

Find all the entries that are pointing to the problem files and remove those problem entries from your <project-name>.xdk file. Obviously, you need to do this when the XDK is closed and only after you have made a backup copy of your <project-name>.xdk file, just in case you end up with a missing comma. The <project-name>.xdk file is a JSON file and needs to be in proper JSON format after you make changes or it will not be read properly by the XDK when you open it.

Then move your problem icons and splash screen images to the package-assets folder and reference them from there. Use this technique (below) to add additional icons by using the intelxdk.config.additions.xml file.

<!-- alternate way to add icons to Cordova builds, rather than using XDK GUI -->
<!-- especially for adding icon resolutions that are not covered by the XDK GUI -->
<!-- Android icons and splash screens -->
<platform name="android">
    <icon src="package-assets/android/icon-ldpi.png" density="ldpi" width="36" height="36" />
    <icon src="package-assets/android/icon-mdpi.png" density="mdpi" width="48" height="48" />
    <icon src="package-assets/android/icon-hdpi.png" density="hdpi" width="72" height="72" />
    <icon src="package-assets/android/icon-xhdpi.png" density="xhdpi" width="96" height="96" />
    <icon src="package-assets/android/icon-xxhdpi.png" density="xxhdpi" width="144" height="144" />
    <icon src="package-assets/android/icon-xxxhdpi.png" density="xxxhdpi" width="192" height="192" />
    <splash src="package-assets/android/splash-320x426.9.png" density="ldpi" orientation="portrait" />
    <splash src="package-assets/android/splash-320x470.9.png" density="mdpi" orientation="portrait" />
    <splash src="package-assets/android/splash-480x640.9.png" density="hdpi" orientation="portrait" />
    <splash src="package-assets/android/splash-720x960.9.png" density="xhdpi" orientation="portrait" />
</platform>

Upgrading to the latest version of the Intel XDK results in a build error with existing projects.

Some users have reported that by creating a new project, adding their plugins to that new project and then copying the www folder from the old project to the new project they are able to resolve this issue. Obviously, you also need to update your Build Settings in the new project to match those from the old project.

How do I generate my Android hash key for Facebook ads?

Please see this article for help.

Back to FAQs Main

Novosibirsk State University Gets More Efficient Numerical Simulation


Russia's Novosibirsk State University boosted a simulation tool’s performance by 3X with Intel® Parallel Studio, Intel® Advisor, and Intel® Trace Analyzer and Collector, cutting the standard time for calculating one problem from one week to just two days.

When researchers at the University were looking to develop and optimize a software tool for numerical simulation of magnetohydrodynamics (MHD) problems with hydrogen ionization—part of an astrophysical objects simulation (AstroPhi) project—they needed to optimize the tool’s performance on Intel® Xeon Phi™ processor-based hardware. The team turned to Intel® Advisor and Intel® Trace Analyzer and Collector. This resulted in a performance speed-up of 3X.

“The use of Intel® Advanced Vector Extensions for Intel® Xeon Phi™ processors gave us the maximum code performance compared with other architectures available on the market,” explained Igor Kulikov, assistant professor.

Get the whole story in our new case study.

Installing Intel® MKL Cloudera* CDH Parcel


Intel® worked with Cloudera* to make it easy to use the Intel® Math Kernel Library (Intel® MKL), which is supported through community forums, with Cloudera CDH. This page provides general installation and support notes for Intel® MKL as distributed via the Cloudera CDH parcel described below.

These software development tools are also available as part of the Intel® Parallel Studio XE and Intel® System Studio products. These products include enterprise-level Intel® Online Service Center support.

Using Intel® MKL Parcel

Here is how to install the Intel® MKL parcel:

  1. In the Cloudera Manager Admin Console, open the Parcels page by doing one of the following:
    • Click the Parcels indicator in the top navigation bar.
    • Click Hosts in the top navigation bar, then the Parcels tab.
  2. Click the Configuration button on the Parcels page.
  3. In the Remote Parcel Repository URLs list, click the plus symbol to open an additional row and enter the path to the Intel® MKL parcel repository:

http://parcels.repos.intel.com/mkl/latest

  4. Click the Save Changes button.
  5. Click the Check for New Parcels button.
  6. In the Location selector, click Available Remotely. The latest Intel MKL parcel should be available for download.
  7. Click the Download button for the Intel® MKL parcel. By downloading Intel® MKL you agree to the terms and conditions stated in the End-User License Agreement (EULA).
  8. When the download is complete, click the Distribute button to distribute the parcel to all cluster nodes.
  9. When distribution is complete, click the Activate button to activate the Intel MKL parcel on all cluster nodes. A pop-up indicates which services must be restarted to use the new parcel.
  10. Choose one of the following:
    • Restart - Activate the parcel and restart services affected by the parcel.
    • Activate Only - Activate the parcel. You can restart services at a time that is convenient.
  11. Click OK.

Note: The repository URL shown above installs the latest version of the Intel MKL parcel. To install an older version, use a URL based on the following model:

http://parcels.repos.intel.com/mkl/<VERSION>.<UPDATE>.<BUILD_NUMBER>

The following variables are used in the repository URL: <VERSION>, <UPDATE>, and <BUILD_NUMBER>. The available values are listed in the table below:

Product     <VERSION>  <UPDATE>  <BUILD_NUMBER>
Intel® MKL  2017       2         201

Example:

http://parcels.repos.intel.com/mkl/2017.2.201

You can find more information about Parcels installation at Managing Software Installation Using Cloudera Manager.

Have Questions?

Check out the FAQ
Or ask in our User Forums

Intel® Trace Analyzer and Collector 2017 Update 3 Readme


The Intel® Trace Analyzer and Collector for Linux* and Windows* is a low-overhead, scalable event-tracing library with graphical analysis that reduces the time it takes an application developer to achieve maximum performance of cluster applications. This package is for users who develop on and build for Intel® 64 architectures on Linux* and Windows*, as well as customers running on Intel® Xeon Phi™. The package also includes an optional download for macOS* for analysis only. You must have a valid license to download, install, and use this product.

The Intel® Trace Analyzer and Collector 2017 Update 3 for Linux* and Windows* packages are now ready for download.  The Intel® Trace Analyzer and Collector is only available as part of Intel® Parallel Studio XE Cluster Edition.

New in this release:

  • Various bug fixes for improved stability and usability

Refer to the Intel® Trace Analyzer and Collector Release Notes for more details.

Contents:

  • Intel® Trace Analyzer and Collector 2017 Update 3 for Linux*
    • l_itac_p_2017.3.030.tgz - A file containing the complete product installation for Linux* OS.
    • w_ita_p_2017.3.030.exe - A file containing the Graphical User Interface (GUI) installation for Windows* OS.
    • m_ita_p_2017.3.030.tgz - A file containing the Graphical User Interface (GUI) installation for macOS*.
  • Intel® Trace Analyzer and Collector 2017 Update 3 for Windows*
    • w_itac_p_2017.3.027.exe - A file containing the complete product installation for Windows* OS.
    • m_ita_p_2017.3.030.tgz - A file containing the Graphical User Interface (GUI) installation for macOS*.

Intel® MPI Library 2017 Update 3 Readme


The Intel® MPI Library is a high-performance interconnect-independent multi-fabric library implementation of the industry-standard Message Passing Interface, v3.1 (MPI-3.1) specification.  This package is for MPI users who develop on and build for Intel® 64 architectures on Linux* and Windows*, as well as customers running on the Intel® Xeon Phi™ product family.  You must have a valid license to download, install, and use this product.

The Intel® MPI Library 2017 Update 3 for Linux* and Windows* packages are now ready for download.  The Intel® MPI Library is available as a stand-alone product and as part of the Intel® Parallel Studio XE Cluster Edition.

New in this release:

  • Accelerated Intel® MPI Library startup for faster HPC application performance
  • Updated the default fabric list on systems with Intel® Omni-Path Architecture (Linux* only)
  • Performance tuning for latest Intel® Xeon® processors
  • Various bug fixes for improved stability and usability

Refer to the Intel® MPI Library Release Notes for more details.

Contents:

  • Intel® MPI Library 2017 Update 3 for Linux*
    • l_mpi_2017.3.196.tgz - A file containing the complete product installation for Linux* OS.
    • l_mpi-rt_2017.3.196.tgz - A file containing the free runtime environment installation for Linux* OS.
  • Intel® MPI Library 2017 Update 3 for Windows*
    • w_mpi_p_2017.3.210.exe - A file containing the complete product installation for Windows* OS.
    • w_mpi-rt_p_2017.3.210.exe - A file containing the free runtime environment installation for Windows* OS.

Known issue collecting FLOPS data with Intel® Advisor 2017 Update 3


Problem:

FLOPS and all related data, including Roofline data, are completely missing if a survey is collected with the -no-auto-finalize option.

Affected customers:

It should mostly affect our Intel® Xeon Phi™ processor (codename: Knights Landing) customers, because we recommend they perform remote finalization to avoid significant overheads.

Steps to reproduce:

  1. Collect survey with the -no-auto-finalize option

  2. Collect FLOPS data

  3. Finalize/open results and discover that FLOPS and all related data are missing (including Roofline)

Root cause:

There is an issue with the filtering of FLOPS data. The collector always looks for callstack information, even if this mode is disabled. However, this information is only available after survey finalization, so no data is collected if survey finalization was skipped.

Workarounds:

  1. Perform survey finalization before collecting FLOPS data. It may be done either remotely or locally on the Intel® Xeon Phi™ (the latter is not recommended because of the overhead)

  2. Same as above, but you can reuse <advisor_project_dir>\e000\callstacks.def for further collections of the same application (only if there are no changes in the application modules/callstacks; otherwise some data may be missing)


Intel® Manycore Platform Software Stack for Intel® Xeon Phi™ Coprocessor x200


Summary of (latest) changes

This article describes the most recent changes that have been made to the Intel® Manycore Platform Software Stack (Intel® MPSS) 4.x. If you've subscribed to get update notifications, you can use this information to quickly determine whether these changes apply to you.

  • May 8, 2017, Intel® MPSS 4.4.0 Hotfix 1 released for Linux* and Windows*

‍‍About the Intel® Manycore Platform Software Stack 4.x

The Intel MPSS 4.x is necessary to run the Intel® Xeon Phi™ coprocessor x200. It has been tested to work with specific versions of 64-bit operating systems.

The readme files (referenced in the Downloads section) have more information on how to build and install the stack.

One important component of Intel MPSS is the Symmetric Communications Interface (SCIF). The SCIF is included in the RPM bundle. SCIF provides a mechanism for inter-node communications within a single platform. A node, for SCIF purposes, is defined as either an Intel® Xeon Phi™ coprocessor or the Intel® Xeon® processor. In particular, the SCIF abstracts the details of communicating over the PCI Express* bus. The SCIF APIs are callable from both user space (uSCIF) and kernel space (kSCIF).

Intel MPSS is downloadable from the sources below. Note that these packages include documentation and APIs (for example, the SCIF API).

For Linux systems, users can measure Intel® Xeon Phi™ processor and coprocessor x200 product family performance with a tool called micperf. micperf is designed to incorporate a variety of benchmarks into a simple user experience with a single interface for execution. For the coprocessor, the micperf package is distributed as an RPM file within Intel MPSS. The micperf User Guide (micperf_user_guide.pdf) can be found in <MPSS installed directory>/doc. The following table summarizes all the benchmarks that can be run with the micperf tool:

Benchmark              CLI Name       Target Operations                Component                Comments
Intel® MKL DGEMM       dgemm          Double-precision floating point  VFU                      For the processor, micperf provides an MCDRAM and DDR version
Intel MKL SGEMM        sgemm          Single-precision floating point  VFU                      For the processor, micperf provides an MCDRAM and DDR version
Intel MKL SMP Linpack  linpack        Double-precision floating point  VFU
SHOC Download*         shoc download  Bus transfer host to device      PCIe* bus                Only available for the coprocessor
SHOC Readback*         shoc readback  Bus transfer device to host      PCIe bus                 Only available for the coprocessor
STREAM*                stream         Round-trip memory to registers   MCDRAM, GDDR and caches  For the processor, micperf provides an MCDRAM and DDR version
HPLinpack*             hplinpack      Double-precision floating point  VFU                      Only available for the processor
HPCG*                  hpcg           Double-precision floating point  VFU                      Only available for the processor; requires Intel® MPI Library

Note: the Intel MPSS download files for Linux marked “.gz” should end in “.gz” when downloaded; most browsers leave the extension alone, but Internet Explorer* may rename the files. If this affects you, we recommend renaming the file to the proper extension after downloading.

‍‍Getting notified of future updates

If you want to receive updates when we publish a new Intel MPSS 4.x stack, add a comment at the bottom of this page.

Downloads

There is currently one major release available for Intel MPSS 4.x. We recommend that new adopters start with the 4.4 release. Support for each Intel MPSS release ends 6 months from the date it was posted, except for long-term support products.

Intel MPSS 4.4.0 HotFix 1 release for Linux

Intel® Manycore Platform Software Stack version: MPSS 4.4.0 Hotfix 1 (released: May 8, 2017)

Downloads available                         Size (range)  MD5 Checksum
RHEL 7.3                                    214MB         8a015c38379b8be42c8045d3ceb44545
RHEL 7.2                                    214MB         694b7b908c12061543d2982750985d8b
SLES 12.2                                   213MB         506ab12af774f78fa8e107fd7a4f96fd
SLES 12.1                                   213MB         b8520888954e846e8ac8604d62a9ba96
SLES 12.0                                   213MB         88a3a4415afae1238453ced7a0df28ea
Card installer file (mpss-4.4.0-card.tar)   761MB         d26e26868297cea5fd4ffafe8d78b66e
Source file (mpss-4.4.0-card-source.tar)    514MB         127713d06496090821b5bb3613c95b30

Documentation link      Description                                                                            Last Updated On  Size (approx)
releasenotes-linux.txt  Release Notes (English)                                                                May 2017         15KB
readme.txt              Readme (includes installation instructions) for Linux (English)                        May 2017         17KB
mpss_user_guide.pdf     MPSS User's Guide                                                                      May 2017         3MB
eula.txt                End User License Agreement (IMPORTANT: Read Before Downloading, Installing, or Using)  May 2017         33KB

Intel MPSS 4.4.0 HotFix 1 release for Microsoft Windows

Intel® Manycore Platform Software Stack version: MPSS 4.4.0 Hotfix 1 (released: May 8, 2017)

Downloads available     Size    MD5 Checksum
mpss-4.4.0-windows.zip  1091MB  204a65b36858842f472a37c77129eb53

Documentation link        Description                                                                            Last Updated On  Size
releasenotes-windows.txt  Release notes (English)                                                                May 2017         7KB
readme-windows.pdf        Readme for Microsoft* Windows (English)                                                May 2017         399KB
mpss_users_guide_windows  MPSS User Guide for Windows                                                            May 2017         3MB
eula.txt                  End User License Agreement (IMPORTANT: Read Before Downloading, Installing, or Using)  May 2017         33KB

‍‍Additional documentation

The Intel MPSS packages contain additional documentation for Linux: man pages and documents in /usr/share/doc/ (see myo, intel-coi-* and micperf-* directories).

‍‍Where to ask questions and get more information

The discussion forum at http://software.intel.com/en-us/forums/intel-many-integrated-core is available to join and discuss any enhancements or issues with Intel® MPSS.

Information about Intel MPSS security can be found here. 

You can also find support collaterals here or submit an issue.

Intel® Xeon Phi™ Coprocessor x200 Quick Start Guide


Introduction

This document introduces the basic concept of the Intel® Xeon Phi™ coprocessor x200 product family, tells how to install the coprocessor software stack, discusses the build environment, and points to important documents so that you can write code and run applications.

The Intel Xeon Phi coprocessor x200 is the second generation of the Intel Xeon Phi product family. Unlike the first generation running on an embedded Linux* uOS, this second generation supports the standard Linux kernel. The Intel Xeon Phi coprocessor x200 is designed for installation in a third-generation PCI Express* (PCIe*) slot of an Intel® Xeon® processor host. The following figure shows a typical configuration:

 Intel Xeon Phi coprocessor x200 architecture

Benefits of the Intel Xeon Phi coprocessor:

  • System flexibility: Build a system that can support a wide range of applications, from serial to highly parallel, while leveraging code optimized for Intel Xeon processors or Intel Xeon Phi processors.
  • Maximize density: Gain significant performance improvements with limited acquisition cost by maximizing system density.
  • Upgrade path: Improve performance by adding to an Intel Xeon processor system or upgrading from the first generation of the Intel Xeon Phi product family with minimum code changes.

For workloads that fit within 16 GB coprocessor memory, adding a coprocessor to a host server allows customers to avoid costly networking. For workloads that have a significant portion of highly parallel phases, offload can offer significant performance with minimal code optimization investment.

Additional Documentation

Basic System Architecture

The Intel Xeon Phi coprocessor x200 is based on a modern Intel® Atom™ microarchitecture with considerable high performance computing (HPC)-focused performance improvements. It has up to 72 cores with four threads per core, giving a total of 288 CPUs as viewed by the operating system, and has up to 16 GB of high-bandwidth on-package MCDRAM memory that provides over 500 GB/s effective bandwidth. The coprocessor has an x16 PCI Express Gen3 interface (8 GT/s) to connect to the host system.

The cores are laid out in units called tiles. Each tile contains a pair of cores, a shared 1 MB L2 cache, and a hub connecting the tile to a mesh interface. Each core contains two 512-bit wide vector processing units. The coprocessor supports Intel® AVX-512F (foundation), Intel AVX-512CD (conflict detection), Intel AVX-512PF (prefetching), and Intel AVX-512ER (exponential reciprocal) ISA.

Intel® Manycore Platform Software Stack

Intel® Manycore Platform Software Stack (Intel® MPSS) is the user and system software that allows programs to run on and communicate with the Intel Xeon Phi coprocessor. Intel MPSS version 4.x.x is used for the Intel Xeon Phi coprocessor x200; with it, a standard Linux kernel runs on the coprocessor. (Note that the older Intel MPSS version 3.x.x is used for the Intel Xeon Phi coprocessor x100.)

You can download the Intel MPSS stack at https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-for-intel-xeon-phi-coprocessor-x200. The following host operating systems are supported: Red Hat* Enterprise Linux Server, SUSE* Linux Enterprise Server and Microsoft Windows*. For detailed information on requirements and on installation, please consult the README file for Intel MPSS. The figure below shows the high-level representation of the Intel MPSS. The host software stack is on the left and the coprocessor software stack is on the right.

 High-level representation of the Intel MPSS.

Install the Software Stack and Start the Coprocessor

Installation Guide for Linux* Host:

  1. From “Intel Manycore Platform Software Stack for Intel Xeon Phi Coprocessor x200” (https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-for-intel-xeon-phi-coprocessor-x200), navigate to the latest version of the Intel MPSS release for Linux and download “Readme for Linux (English)” (README.txt). Also download the release notes (releasenotes-linux.txt) and the User’s Guide for Intel MPSS.
  2. Install one of the following supported operating systems in the host:
    • Red Hat Enterprise Linux Server 7.2 64-bit kernel 3.10.0-327
    • Red Hat Enterprise Linux Server 7.3 64-bit kernel 3.10.0-514
    • SUSE Linux Enterprise Server SLES 12 kernel 3.12.28-4-default
    • SUSE Linux Enterprise Server SLES 12 SP1 kernel 3.12.49-11-default
    • SUSE Linux Enterprise Server SLES 12 SP2 kernel 4.4.21-69-default

    Be sure to install ssh, which is used to log in to the card.

    WARNING: Installing Red Hat may automatically update the system to a newer version of the Linux kernel. If this happens, you will not be able to use the prebuilt host driver and will need to rebuild it manually for the new kernel version. See Section 5 of readme.txt for instructions on building an Intel MPSS host driver for a specific Linux kernel.

  3. Log in as root.
  4. From the page in Step 1, download the release driver appropriate for your operating system (<mpss-version>-linux.tar), where <mpss-version> was mpss-4.3.3 at the time this document was written.
  5. Install the host driver RPMs as detailed in Section 6 of readme.txt. Don’t skip the creation of configuration files for your coprocessor.
  6. Update the flash on your coprocessor(s) as detailed in Section 8 of readme.txt.
  7. Reboot the system.
  8. Start the Intel Xeon Phi coprocessor (you can set up the card to start with the host system; it will not do so by default), and then run micinfo to verify that it is set up properly:
    # systemctl start mpss
    # micctrl –w
    # /usr/bin/micinfo
    micinfo Utility Log
    Created On Mon Apr 10 12:14:08 2017
    
    System Info:
        Host OS                        : Linux
        OS Version                     : 3.10.0-327.el7.x86_64
        MPSS Version                   : 4.3.2.5151
        Host Physical Memory           : 128529 MB
    
    Device No: 0, Device Name: mic0 [x200]
    
    Version:
        SMC Firmware Version           : 121.27.10198
        Coprocessor OS Version         : 4.1.36-mpss_4.3.2.5151 GNU/Linux
        Device Serial Number           : QSKL64000441
        BIOS Version                   : GVPRCRB8.86B.0012.R02.1701111545
        BIOS Build date                : 01/11/2017
        ME Version                     : 3.2.2.4
    
    Board:
        Vendor ID                      : 0x8086
        Device ID                      : 0x2260
        Subsystem ID                   : 0x7494
        Coprocessor Stepping ID        : 0x01
        UUID                           : A03BAF9B-5690-E611-8D4F-001E67FC19A4
        PCIe Width                     : x16
        PCIe Speed                     : 8.00 GT/s
        PCIe Ext Tag Field             : Disabled
        PCIe No Snoop                  : Enabled
        PCIe Relaxed Ordering          : Enabled
        PCIe Max payload size          : 256 bytes
        PCIe Max read request size     : 128 bytes
        Coprocessor Model              : 0x57
        Coprocessor Type               : 0x00
        Coprocessor Family             : 0x06
        Coprocessor Stepping           : B0
        Board SKU                      : B0 SKU _NA_A
        ECC Mode                       : Enabled
        PCIe Bus Information           : 0000:03:00.0
        Coprocessor SMBus Address      : 0x00000030
        Coprocessor Brand              : Intel(R) Corporation
        Coprocessor Board Type         : 0x0a
        Coprocessor TDP                : 300.00 W
    
    Core:
        Total No. of Active Cores      : 68
        Threads per Core               : 4
        Voltage                        : 900.00 mV
        Frequency                      : 1.20 GHz
    
    Thermal:
        Thermal Dissipation            : Active
        Fan RPM                        : 6000
        Fan PWM                        : 100 %
        Die Temp                       : 38 C
    
    Memory:
        Vendor                         : INTEL
        Size                           : 16384.00 MB
        Technology                     : MCDRAM
        Speed                          : 6.40 GT/s
        Frequency                      : 6.40 GHz
        Voltage                        : Not Available

Installation Guide for Windows* Host:

  1. From the “Intel Manycore Platform Software Stack for Intel Xeon Phi Coprocessor x200” page (https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-for-intel-xeon-phi-coprocessor-x200), navigate to the latest version of the Intel MPSS release for Microsoft Windows. Download “Readme file for Microsoft Windows” (readme-windows.pdf). Also download the “Release notes” (releaseNotes-windows.txt) and the “Intel MPSS User’s Guide” (MPSS_Users_Guide-windows.pdf).
  2. Install one of the following supported operating systems in the host:
    • Microsoft Windows 8.1 (64-bit)
    • Microsoft Windows® 10 (64-bit)
    • Microsoft Windows Server 2012 R2 (64-bit)
    • Microsoft Windows Server 2016 (64-bit)
  3. Log in as “administrator”.
  4. Install the .NET Framework* 4.5 or higher (http://www.microsoft.com/net/download), Python* 2.7.5 x86-64 or higher (Python 3.x is not supported), and Pywin32 build or higher (https://sourceforge.net/projects/pywin32).
  5. Be sure to install PuTTY* and PuTTYgen*, which are used to log in to the card’s OS.
  6. Follow the preliminary steps as instructed in Section 2.2.1 of the Readme file.
  7. Restart the system.
  8. Download the drivers package mpss-4.*-windows.zip for your Windows operating system from the page described in Step 1.
  9. Unzip the zip file to get the Windows exec files (“mpss-4.*.exe” and “mpss-essentials-4*.exe”).
  10. Install the Windows Installer file “mpss-4.*.exe” as detailed in Section 3.2 of the User’s Guide. Note that if a previous version of the Intel Xeon Phi coprocessor stack is already installed, use Windows Control Panel to uninstall it prior to installing the current version. By default, Intel MPSS is installed in “c:\Program Files\Intel\MPSS”. Also, install “mpss-essentials-4*.exe”, the native binary utilities for the Intel Xeon Phi coprocessor. These are required when using offload programming or cross compilers.
  11. Confirm that the new Intel MPSS stack is successfully installed by looking at Control Panel > Programs > Programs and Features: Intel Xeon Phi (see the following illustrations).

    Figure: Control Panel > Programs > Programs and Features showing the Intel Xeon Phi entry.

  12. Update the flash according to Section 2.2.3 of the readme-windows.pdf file.
  13. Reboot the system.
  14. Log in to the host and verify that the Intel Xeon Phi x200 coprocessors are detected by the Device Manager (Control Panel > Hardware > Device Manager, and click “System devices”):

    Figure: Device Manager (Control Panel > Hardware > Device Manager > System devices) showing the Intel Xeon Phi x200 coprocessors.
  15. Start the Intel Xeon Phi coprocessor (you can set up the card to start with the host system; it will not do so by default). Launch a command-prompt window and start the Intel MPSS stack:
        prompt> micctrl --start
  16. Run the command “micinfo” to verify that it is set up properly:
        prompt> micinfo.exe

Intel® Parallel Studio XE

After starting the Intel MPSS stack, users can write applications running on the coprocessor using Intel Parallel Studio XE.

Intel Parallel Studio XE is a software development suite that helps boost application performance by taking advantage of the ever-increasing processor core count and vector register width available in Intel Xeon processors, Intel Xeon Phi processors and coprocessors, and other compatible processors. Starting with the Intel Parallel Studio 2018 beta, the following Intel® products support program development on the Intel Xeon Phi coprocessor x200:

  • Intel® C Compiler/Intel® C++ Compiler/Intel® Fortran Compiler
  • Intel® Math Kernel Library (Intel® MKL)
  • Intel® Data Analytics Acceleration Library (Intel® DAAL)
  • Intel® Integrated Performance Primitives (Intel® IPP)
  • Intel® Cilk™ Plus
  • Intel® Threading Building Blocks (Intel® TBB)
  • Intel® VTune™ Amplifier XE
  • Intel® Advisor XE
  • Intel® Inspector XE
  • Intel® MPI Library
  • Intel® Trace Analyzer and Collector
  • Intel® Cluster Ready
  • Intel® Cluster Checker

To get started writing programs running on the coprocessor, you can get the code samples at https://software.intel.com/en-us/product-code-samples. The packages “Intel Parallel Studio XE for Linux - Sample Bundle”, and “Intel Parallel Studio XE for Windows - Sample Bundle” contain code samples for Linux and Windows, respectively.

Programming Models on Coprocessor

There are three programming models that can be used with the Intel Xeon Phi coprocessor x200: the offload, symmetric, and native programming models.

  • Offload programming: The main application runs on the host and offloads selected, highly parallel portions of the program to the coprocessor(s) to take advantage of the manycore architecture. The serial portion of the program still runs on the host to take advantage of the big-core architecture.
  • Symmetric programming: The coprocessors and the host are treated as separate nodes. This model is suitable for distributed computing.
  • Native programming: The coprocessors are used as independent nodes, just like a host. Users compile the binary for the coprocessor on the host, transfer the binary, and log in to the coprocessor to run it.

The figure below summarizes different programming models used for the Intel Xeon Phi coprocessor:

Use Intel SGX Templates for the GNU* Autoconf* Build System

GNU* Autoconf* is a popular build system that sees extensive use for Linux* source code packages. It produces a consistent, easy-to-use, and well-understood configuration script that allows end users and systems integrators to tailor software packages for their installation environments, almost always without any manual intervention. To create a configure script, the software developer creates a template file consisting of a series of macros that define the software package configuration needs, and then processes it with the Autoconf utility. GNU Autoconf provides convenient automation and standardization for common, and often tedious, tasks such as building Makefiles and configurable header files.

One of the key features of the Autoconf system is that it is extensible. Software developers can create macros that expand its functionality in order to support customized build and configuration needs. In this article, we introduce a set of macros and Makefile templates that do exactly this: Extend the functionality of Autoconf to simplify the process of building software that makes use of Intel® Software Guard Extensions (Intel® SGX). The templates themselves, along with a sample application source tree that makes use of them, are provided as a download.

Overview

The Intel SGX templates for the GNU Autoconf package contain four files:

  • README
  • aclocal.m4
  • sgx-app.mk.in
  • sgx-enclave.mk.in

README

The README file has detailed information on the Autoconf macros and Makefile rules and variables that make up the templates. It is a reference document, while this article functions more as a “how to” guide.

aclocal.m4

This is where the macros for extending Autoconf are defined. This file can be used as-is, appended to an existing aclocal.m4, or renamed for integration with GNU Automake*.

sgx-app.mk.in

This file builds to “sgx-app.mk” and contains Makefile rules and definitions for building Intel SGX applications. It is intended to be included (via an “include” directive) from the Makefile(s) that produce an executable object that includes one or more Intel SGX enclaves.

sgx-enclave.mk.in

This file builds to “sgx-enclave.mk” and contains Makefile rules and definitions for building Intel SGX enclaves. It must be included (via an “include” directive) from Makefiles that produce an Intel SGX enclave object (*.signed.so file in Linux).

Because this file contains build targets, you should place the include directive after the default build target in the enclave’s Makefile.in.

Creating configure.ac

Start by including the macro SGX_INIT in your configure.ac. This macro is required in order to set up the build system for Intel SGX, and it does the following:

  • Adds several options to the final configure script that let the user control aspects of the build.
  • Attempts to discover the location of the Intel SGX SDK.
  • Creates sgx_app.mk from sgx_app.mk.in.

SGX_INIT also defines a number of Makefile substitution variables. The ones most likely to be needed by external Makefiles are:

  • enclave_libdir: Installation path for enclave libraries/objects. Defaults to $EPREFIX/lib.
  • SGX_URTS_LIB: The untrusted runtime library name. When the project is built in simulation mode it automatically includes the _sim suffix.
  • SGX_UAE_SERVICE_LIB: The untrusted AE service library name. When the project is built in simulation mode it automatically includes the _sim suffix.
  • SGXSDK: The location of the Intel® SGX SDK.
  • SGXSDK_BINDIR: The directory containing Intel SGX SDK utilities.
  • SGXSDK_INCDIR: The location of Intel SGX SDK header files.
  • SGXSDK_LIBDIR: The directory containing the Intel SGX SDK libraries needed during linking.

The SGX_INIT macro does not take any arguments.

AC_INIT(sgxautosample, 1.0, john.p.mechalas@intel.com)

AC_PROG_CC()
AC_PROG_CXX()
AC_PROG_INSTALL()

AC_CONFIG_HEADERS([config.h])

SGX_INIT()

AC_CONFIG_FILES([Makefile])

AC_OUTPUT()

Next, define the enclaves. Each enclave is expected to have a unique name, and should be located in a subdirectory that is named after it. Specify the enclaves using the SGX_ADD_ENCLAVES macro. It takes one or two arguments:

  1. (required) The list of enclave names.
  2. (optional) The parent directory where the enclave subdirectories can be found. This defaults to “.”, the current working directory, if omitted.

Note that you can invoke this macro multiple times if your project has multiple enclaves and they do not share a common parent directory. Enclave names should not include spaces or slashes.

AC_INIT(sgxautosample, 1.0, john.p.mechalas@intel.com)

AC_PROG_CC()
AC_PROG_CXX()
AC_PROG_INSTALL()

AC_CONFIG_HEADERS([config.h])

SGX_INIT()

# Add enclave named “EnclaveHash” in the EnclaveHash/ directory
SGX_ADD_ENCLAVES([EnclaveHash])

AC_CONFIG_FILES([Makefile])

AC_OUTPUT()

In addition to defining the enclaves, this macro does the following:

  • Builds sgx_enclave.mk from sgx_enclave.mk.in.
  • Builds the Makefiles in each enclave subdirectory from their respective Makefile.in sources.

Enclave Makefiles

Each enclave’s Makefile needs to include the global sgx_enclave.mk rules file in order to inherit the rules, targets, and variables that automate enclave builds. Each Enclave must abide by the following rules:

  • The enclave must be in its own subdirectory.
  • The name of the subdirectory must match the name of the enclave (for example, an enclave named EnclaveCrypto must be placed in a subdirectory named EnclaveCrypto).
  • The EDL file for the enclave must also match the enclave name (for example, EnclaveCrypto.edl).
  • The Makefile must define the name of the enclave in a variable named ENCLAVE (for example, ENCLAVE=EnclaveCrypto).

The sgx_enclave.mk file defines a number of variables for you to use in the enclave’s Makefile:

  • ENCLAVE_CLEAN: A list of files that should be removed during 'make clean'.
  • ENCLAVE_CPPFLAGS: C preprocessor flags.
  • ENCLAVE_CXXFLAGS: C++ compiler flags necessary for building an enclave.
  • ENCLAVE_DISTCLEAN: A list of files that should be removed during 'make distclean'.
  • ENCLAVE_LDFLAGS: Linker flags for generating the enclave .so.
  • ENCLAVE_TOBJ: The trusted object file $(ENCLAVE)_t.o that is auto-generated by the sgx_edger8r tool. Include this in your enclave link line and the enclave build dependencies.

Here’s the Makefile.in for the enclave in the sample application included with the templates:

CC=@CC@
CFLAGS=@CFLAGS@
CPPFLAGS=@CPPFLAGS@
LDFLAGS=@LDFLAGS@

INSTALL=@INSTALL@
prefix=@prefix@
exec_prefix=@exec_prefix@
bindir=@bindir@
libdir=@libdir@
enclave_libdir=@enclave_libdir@

ENCLAVE=EnclaveHash

OBJS=$(ENCLAVE).o

%.o: %.c
        $(CC) $(CPPFLAGS) $(ENCLAVE_CPPFLAGS) $(CFLAGS) $(ENCLAVE_CFLAGS) -c $<

all: $(ENCLAVE).so

install: all
        $(INSTALL) -d $(enclave_libdir)
        $(INSTALL) -t $(enclave_libdir) $(ENCLAVE_SIGNED)

include ../sgx_enclave.mk

$(ENCLAVE).so: $(ENCLAVE_TOBJ) $(OBJS)
        $(CC) $(CFLAGS) -o $@ $(ENCLAVE_TOBJ) $(OBJS) $(LDFLAGS) $(ENCLAVE_LDFLAGS)

clean:
        rm -f $(OBJS) $(ENCLAVE_CLEAN)

distclean: clean
        rm -f Makefile $(ENCLAVE_DISTCLEAN)

Application Makefiles

Application components that reference enclaves need to include sgx_app.mk in their Makefile. It defines a number of rules, targets, and variables to assist with the build.

To get a list of all the enclaves in the project, the Makefile must define a list variable from the @SGX_ENCLAVES@ substitution variable that is set by Autoconf:

SGX_ENCLAVES:=@SGX_ENCLAVES@

This should be included as a build target as well, to ensure that all enclaves are built along with the application.

all: enclavetest $(SGX_ENCLAVES)

The variables most likely to be needed by the application’s Makefile are:

  • ENCLAVE_CLEAN: A list of files that should be removed during 'make clean'.
  • ENCLAVE_UOBJS: The untrusted object files $(ENCLAVE)_u.o that are auto-generated by the sgx_edger8r tool. Include these in your application link line and the enclave build dependencies.
  • ENCLAVE_UDEPS: The untrusted source and header files that are auto-generated by the sgx_edger8r tool. Include these in your compilation dependencies when building your application.

Here’s the Makefile for the sample application that is bundled with the templates:

SGX_ENCLAVES:=@SGX_ENCLAVES@

CC=@CC@
CFLAGS=@CFLAGS@ -fno-builtin-memset
CPPFLAGS=@CPPFLAGS@
LDFLAGS=@LDFLAGS@ -L$(SGXSDK_LIBDIR)
LIBS=@LIBS@

INSTALL=@INSTALL@
prefix=@prefix@
exec_prefix=@exec_prefix@
bindir=@bindir@
libdir=@libdir@
enclave_libdir=@enclave_libdir@

APP_OBJS=main.o

%.o: %.c
        $(CC) -c $(CPPFLAGS) $(CFLAGS) -I$(SGXSDK_INCDIR) $<

all: enclavetest $(SGX_ENCLAVES)

install: install-program install-enclaves

install-program: all
        $(INSTALL) -d $(bindir)
        $(INSTALL) -t $(bindir) enclavetest

install-enclaves:
        for dir in $(SGX_ENCLAVES); do \
                $(MAKE) -C $$dir install; \
        done

include sgx_app.mk

enclavetest: $(ENCLAVE_UOBJS) $(APP_OBJS)
        $(CC) -o $@ $(LDFLAGS) $(APP_OBJS) $(ENCLAVE_UOBJS) $(LIBS) -l$(SGX_URTS_LIB)

clean: clean_enclaves
        rm -f enclavetest $(APP_OBJS) $(ENCLAVE_CLEAN)

distclean: clean distclean_enclaves
        rm -rf Makefile config.log config.status config.h autom4te.cache
        rm -rf sgx_app.mk sgx_enclave.mk

Note that the link line for the application references the sgx_urts library via the Makefile variable $(SGX_URTS_LIB). This is to support builds made in simulation mode: The variable will automatically append the _sim suffix to the library names so that the Makefile doesn’t have to define multiple build targets. Always use the variables $(SGX_URTS_LIB) and $(SGX_UAE_SERVICE_LIB) in your Makefile instead of the actual library names.

Running the Configure Script

When the configure.ac file is processed by Autoconf, the resulting configure script will have some additional command-line options. These are added by the SGX_INIT macro:

--enable-sgx-simulation

Build the project in simulation mode. This is for running and testing Intel SGX applications on hardware that does not support Intel SGX instructions.

--with-enclave-libdir-path=path

Specify where enclave libraries should be installed, and set the enclave_libdir substitution variable in Makefiles. The default is $EPREFIX/lib.

--with-sgx-build=debug|prerelease|release

Specify whether to build the Intel SGX application in debug, prerelease, or release mode. The default is to build in debug mode.

See the Intel SGX SDK for information on the various build modes. Note that you cannot mix release or prerelease modes with the --enable-sgx-simulation option.

--with-sgxsdk=path

Specify the Intel SGX SDK installation directory. This overrides the auto-detection procedure.

Summary and Future Work

These templates simplify the process of integrating the GNU build system with Intel SGX projects. They eliminate tedious, redundant coding, relieve the developer of the burden of remembering and entering the numerous libraries and compiler and linker flags needed to build Intel SGX enclaves, and automate the execution of supporting tools such as sgx_edger8r and sgx_sign.

While this automation and integration is valuable, there is still a non-trivial amount of effort required to set up the project environment. Further automation might be possible through the use of GNU Automake, which is designed to generate the Makefile templates that are in turn processed by Autoconf.

The build environment for Intel SGX applications can be complicated. Integration with build systems such as GNU Autoconfig, and potentially Automake, can save the developer considerable time and make their projects less prone to errors.

TensorFlow* Optimizations on Modern Intel® Architecture

Intel: Elmoustapha Ould-Ahmed-Vall, Mahmoud Abuzaina, Md Faijul Amin, Jayaram Bobba, Roman S Dubtsov, Evarist M Fomenko, Mukesh Gangadhar, Niranjan Hasabnis, Jing Huang, Deepthi Karkada, Young Jin Kim, Srihari Makineni, Dmitri Mishura, Karthik Raman, AG Ramesh, Vivek V Rane, Michael Riera, Dmitry Sergeev, Vamsi Sripathi, Bhavani Subramanian, Lakshay Tokas, Antonio C Valles

Google: Andy Davis, Toby Boyd, Megan Kacholia, Rasmus Larsen, Rajat Monga, Thiru Palanisamy, Vijay Vasudevan, Yao Zhang

TensorFlow* is a leading deep learning and machine learning framework, which makes it important for Intel and Google to ensure that it is able to extract maximum performance from Intel’s hardware offering. This paper introduces the Artificial Intelligence (AI) community to TensorFlow optimizations on Intel® Xeon® and Intel® Xeon Phi™ processor-based platforms. These optimizations are the fruit of a close collaboration between Intel and Google engineers announced last year by Intel’s Diane Bryant and Google’s Diane Green at the first Intel AI Day.

We describe the various performance challenges that we encountered during this optimization exercise and the solutions adopted. We also report performance improvements on a sample of common neural network models. These optimizations can result in orders of magnitude higher performance; for example, our measurements show up to 70x higher performance for training and up to 85x higher performance for inference on the Intel® Xeon Phi™ processor 7250 (KNL). While these results were measured on Intel® Xeon® processor E5 v4 (BDW) and Intel Xeon Phi processor 7250 based platforms, they lay the foundation for next-generation products from Intel. In particular, users are expected to see improved performance on Intel Xeon processors (code named Skylake) and Intel Xeon Phi processors (code named Knights Mill) coming out later this year.

Optimizing deep learning models performance on modern CPUs presents a number of challenges not very different from those seen when optimizing other performance-sensitive applications in High Performance Computing (HPC):

  1. Code must be refactored to take advantage of modern vector instructions. This means ensuring that all the key primitives, such as convolution, matrix multiplication, and batch normalization, are vectorized using the latest SIMD instructions (Intel AVX2 for Intel Xeon processors and Intel AVX-512 for Intel Xeon Phi processors).
  2. Maximum performance requires paying special attention to using all the available cores efficiently. Again, this means looking at parallelization within a given layer or operation as well as parallelization across layers.
  3. As much as possible, data has to be available when the execution units need it. This means balancing the use of prefetching, cache-blocking techniques, and data formats that promote spatial and temporal locality.

To meet these requirements, Intel developed a number of optimized deep learning primitives that can be used inside the different deep learning frameworks to ensure that we implement common building blocks efficiently. In addition to matrix multiplication and convolution, these building blocks include:

  • Direct batched convolution
  • Inner product
  • Pooling: maximum, minimum, average
  • Normalization: local response normalization across channels (LRN), batch normalization
  • Activation: rectified linear unit (ReLU)
  • Data manipulation: multi-dimensional transposition (conversion), split, concat, sum and scale.

Refer to this article for more details on these Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) optimized primitives.
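As a toy illustration of what one of these primitives computes, here is non-overlapping 2x2 max pooling in plain Python. This is illustrative only; the Intel MKL-DNN implementations are vectorized and cache-blocked:

```python
# Naive non-overlapping 2x2 max pooling over a 2-D list, showing the
# operation an optimized pooling primitive implements. Toy code only.
def max_pool_2x2(matrix):
    """Apply non-overlapping 2x2 max pooling to a 2-D list."""
    rows, cols = len(matrix), len(matrix[0])
    pooled = []
    for i in range(0, rows - 1, 2):
        row = []
        for j in range(0, cols - 1, 2):
            row.append(max(matrix[i][j], matrix[i][j + 1],
                           matrix[i + 1][j], matrix[i + 1][j + 1]))
        pooled.append(row)
    return pooled

print(max_pool_2x2([[1, 2, 5, 6],
                    [3, 4, 7, 8],
                    [9, 10, 13, 14],
                    [11, 12, 15, 16]]))  # [[4, 8], [12, 16]]
```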

In TensorFlow, we implemented Intel-optimized versions of operations to make sure that these operations can leverage Intel MKL-DNN primitives wherever possible. While this is a necessary step to enable scalable performance on Intel® architecture, we also had to implement a number of other optimizations. In particular, Intel MKL uses a different layout than the default layout in TensorFlow, for performance reasons. We needed to ensure that the overhead of conversion between the two formats is kept to a minimum. We also wanted to ensure that data scientists and other TensorFlow users don’t have to change their existing neural network models to take advantage of these optimizations.

Graph Optimizations

We introduced a number of graph optimization passes to:

  1. Replace default TensorFlow operations with Intel optimized versions when running on CPU. This ensures that users can run their existing Python programs and realize the performance gains without changes to their neural network model.
  2. Eliminate unnecessary and costly data layout conversions.
  3. Fuse multiple operations together to enable efficient cache reuse on CPU.
  4. Handle intermediate states that allow for faster backpropagation.

These graph optimizations enable greater performance without introducing any additional burden on TensorFlow programmers. Data layout optimization is a key performance optimization. Often, the native TensorFlow data format is not the most efficient data layout for certain tensor operations on CPUs. In such cases, we insert a data layout conversion operation from TensorFlow’s native format to an internal format, perform the operation on the CPU, and convert the operation output back to the TensorFlow format. However, these conversions introduce a performance overhead and should be minimized. Our data layout optimization identifies sub-graphs that can be entirely executed using Intel MKL-optimized operations and eliminates the conversions within the operations in the sub-graph. Automatically inserted conversion nodes take care of data layout conversions at the boundaries of the sub-graph. Another key optimization is the fusion pass that automatically fuses operations that can be run efficiently as a single Intel MKL operation.
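The conversion-elimination idea can be sketched as a toy pass over a linear chain of operations. The op names "to_mkl" and "from_mkl" are invented for illustration; the real pass operates on TensorFlow graph nodes, not strings:

```python
# Toy sketch of layout-conversion elimination: in a linear chain of
# ops, a "to_mkl" immediately followed by "from_mkl" (or the reverse)
# is a no-op pair and can be removed. Illustrative names only.
def eliminate_conversions(ops):
    result = []
    for op in ops:
        if result and {result[-1], op} == {"to_mkl", "from_mkl"}:
            result.pop()          # the adjacent pair cancels out
        else:
            result.append(op)
    return result

chain = ["conv2d", "from_mkl", "to_mkl", "relu", "from_mkl"]
print(eliminate_conversions(chain))  # ['conv2d', 'relu', 'from_mkl']
```

The real optimization is more general, working over sub-graphs rather than chains, but the payoff is the same: conversions survive only at the boundaries.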

Other Optimizations

We have also tweaked a number of TensorFlow framework components to enable the highest CPU performance for various deep learning models. We developed a custom pool allocator using existing pool allocator in TensorFlow. Our custom pool allocator ensures that both TensorFlow and Intel MKL share the same memory pools (using the Intel MKL imalloc functionality) and we don’t return memory prematurely to the operating system, thus avoiding costly page misses and page clears. In addition, we carefully tuned multiple threading libraries (pthreads used by TensorFlow and OpenMP used by Intel MKL) to coexist and not to compete against each other for CPU resources.
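The pool-allocator idea above can be illustrated with a minimal sketch: freed buffers are kept in size-keyed free lists and reused rather than returned to the operating system. This is purely illustrative; the real allocator integrates Intel MKL's imalloc functionality with TensorFlow's allocator framework:

```python
# Toy pool allocator sketch: reusing freed buffers avoids repeated
# "OS" allocations (and, in the real system, costly page misses and
# page clears). Not the actual TensorFlow/Intel MKL implementation.
class PoolAllocator:
    def __init__(self):
        self.free_lists = {}   # size -> list of reusable buffers
        self.os_allocs = 0     # how many times we fell back to the "OS"

    def allocate(self, size):
        pool = self.free_lists.get(size)
        if pool:
            return pool.pop()  # reuse an existing buffer
        self.os_allocs += 1
        return bytearray(size)

    def release(self, buf):
        self.free_lists.setdefault(len(buf), []).append(buf)

alloc = PoolAllocator()
a = alloc.allocate(1024)
alloc.release(a)
b = alloc.allocate(1024)   # served from the pool, not the "OS"
print(alloc.os_allocs)     # 1
```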

Performance Experiments

Optimizations such as those discussed above resulted in dramatic performance improvements on both Intel Xeon and Intel Xeon Phi platforms. To illustrate the performance gains, we report below our best known methods (BKMs) together with baseline and optimized performance numbers for three common ConvNet benchmarks.

  1. The following parameters are important for performance on Intel Xeon (codename Broadwell) and Intel Xeon Phi (codename Knights Landing) processors and we recommend tuning them for your specific neural network model and platform. We have carefully tuned these parameters to gain maximum performance for convnet-benchmarks on both Intel Xeon and Intel Xeon Phi processors.
    1. Data format: we suggest that users specify the NCHW format for their specific neural network model to get maximum performance. TensorFlow’s default NHWC format is not the most efficient data layout for CPU, and it results in some additional conversion overhead.
    2. Inter-op / intra-op: we also suggest that data scientists and users experiment with the intra-op and inter-op parameters in TensorFlow for optimal setting for each model and CPU platform. These settings impact parallelism within one layer as well as across layers.
    3. Batch size: batch size is another important parameter that impacts both the available parallelism to utilize all the cores as well as working set size and memory performance in general.
    4. OMP_NUM_THREADS: maximum performance requires using all the available cores efficiently. This setting is especially important for performance on Intel Xeon Phi processors since it controls the level of hyperthreading (1 to 4).
    5. Transpose in Matrix multiplication: for some matrix sizes, transposing the second input matrix b provides better performance (better cache reuse) in Matmul layer. This is the case for all the Matmul operations used in the three models below. Users should experiment with this setting for other matrix sizes.
    6. KMP_BLOCKTIME: users should experiment with various settings for how much time each thread should wait after completing the execution of a parallel region, in milliseconds.
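As a sketch of how the environment-level knobs above might be set, the values below are placeholders to experiment with, not recommendations; the session-level settings are shown only as comments since they depend on your TensorFlow build:

```python
import os

# Illustrative settings only; tune per model and platform. These must
# be set before TensorFlow is imported so that OpenMP picks them up.
os.environ["OMP_NUM_THREADS"] = "68"   # e.g. one thread per core on a 68-core part
os.environ["KMP_BLOCKTIME"] = "30"     # thread wait time after a parallel region, ms

# Inter-op/intra-op parallelism is then set on the session, e.g.:
#   config = tf.ConfigProto(intra_op_parallelism_threads=68,
#                           inter_op_parallelism_threads=2)
#   sess = tf.Session(config=config)

print(os.environ["OMP_NUM_THREADS"], os.environ["KMP_BLOCKTIME"])
```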

Example settings on Intel® Xeon® processor (codename Broadwell - 2 Sockets - 22 Cores)

Example settings on Intel® Xeon Phi™ processor (codename Knights Landing - 68 Cores)

  1. Performance results on Intel® Xeon® processor (codename Broadwell – 2 Sockets – 22 Cores)

  2. Performance results on Intel® Xeon Phi™ processor (codename Knights Landing – 68 cores)

  3. Performance results with different batch sizes on Intel® Xeon® processor (codename Broadwell) and Intel® Xeon Phi™ processor (codename Knights Landing) - Training

Building and Installing TensorFlow with CPU Optimizations

  1. Run "./configure" from the TensorFlow source directory. If you select the options to use Intel MKL, it will automatically download the latest Intel MKL for machine learning into tensorflow/third_party/mkl/mklml.
  2. Execute the following commands to create a pip package that can be used to install the optimized TensorFlow build.
    • PATH can be changed to point to a specific version of GCC compiler:
      export PATH=/PATH/gcc/bin:$PATH
    • LD_LIBRARY_PATH can also be changed to point to a new GLIBC:
      export LD_LIBRARY_PATH=/PATH/gcc/lib64:$LD_LIBRARY_PATH
    • Build for best performance on Intel Xeon and Intel Xeon Phi processors:
      bazel build --config=mkl --copt="-DEIGEN_USE_VML" -c opt //tensorflow/tools/pip_package:build_pip_package
  3. Install the optimized TensorFlow wheel
    1. bazel-bin/tensorflow/tools/pip_package/build_pip_package ~/path_to_save_wheel
       pip install --upgrade --user ~/path_to_save_wheel/wheel_name.whl

System Configuration

What It Means for AI

Optimizing TensorFlow means deep learning applications built using this widely available and widely applied framework can now run much faster on Intel processors to increase flexibility, accessibility, and scale. The Intel Xeon Phi processor, for example, is designed to scale out in a near-linear fashion across cores and nodes to dramatically reduce the time to train machine learning models. And TensorFlow can now scale with future performance advancements as we continue enhancing the performance of Intel processors to handle even bigger and more challenging AI workloads.

The collaboration between Intel and Google to optimize TensorFlow is part of ongoing efforts to make AI more accessible to developers and data scientists, and to enable AI applications to run wherever they’re needed on any kind of device—from the edge to the cloud. Intel believes this is the key to creating the next-generation of AI algorithms and models to solve the most pressing problems in business, science, engineering, medicine, and society.

This collaboration has already resulted in dramatic performance improvements on leading Intel Xeon and Intel Xeon Phi processor-based platforms, and these improvements are now readily available through Google’s TensorFlow GitHub repository. We ask the AI community to give these optimizations a try and look forward to feedback and contributions that build on them.

Wind River Helix* Device Cloud Application Deployment: POC Retail Vending Machine

Intro

Securely and easily deploying an IoT software solution to multiple gateways across the world can be a challenge. For gateways running Wind River Helix* Device Cloud (HDC), however, there is a clear path to follow that diminishes the challenge. Helix Device Cloud provides complete device lifecycle management, from deploying, to monitoring, to updating, to decommissioning. It also has telemetry capabilities, allowing it to receive and store data in the cloud and act on it using rules and alerts. This article explores a proof of concept that deploys software to vending machine gateways using HDC.

To learn more about the Helix Device Cloud:

https://www.helixdevicecloud.com

Figure 1: High level component diagram with Arduino 101* (branded Genuino 101* outside the U.S.) and Intel® NUC

Set-up

This article assumes that chocolate bar vending machines have been deployed in various locations, that they are controlled by a gateway with the HDC agent installed, and that they are properly configured. The POC uses an Intel® NUC (NUC5I3MYHE) running Ubuntu* 16.04 as the gateway with HDC 2.2.1 installed, and an Arduino 101 on a USB port with Grove* sensors from Seeed* Studio acting as the vending machine sensors. The Arduino 101 has a touch sensor to indicate a purchase of the product; a green LED turns on when a purchase is successful and a red LED turns on when the product is out of stock. A temperature sensor monitors the vending machine’s temperature to see if the chocolate bars are in danger of melting. In addition, a motion sensor counts traffic passing by the vending machine and turns on a blue LED when motion is detected. The software for the vending machine is written in Python* and uses the HDC iot_python module.

For instructions on how to install and configure the HDC Agent on Ubuntu, refer to this guide in the Wind River Knowledge Library:

http://knowledge.windriver.com/en-us/000_Products/040/050/020/000_Wind_River_Helix_Device_Cloud_Getting_Started/060

 

To interface the Arduino 101 board’s sensors with the gateway, MRAA needs to be installed on the gateway:

sudo add-apt-repository ppa:mraa/mraa
sudo apt-get update
sudo apt-get install libmraa1 libmraa-dev mraa-tools python-mraa python3-mraa

Code 1: commands to install MRAA on Ubuntu

The Arduino 101 must also be running the StandardFirmata sketch. That sketch comes with the Arduino IDE under Examples > Firmata > StandardFirmata.

 

Vending Machine Telemetry

The data collected from the vending machine is where the real value comes into play. The gateway application will collect motion, temperature, and inventory data and send it to the Helix Device Cloud. The application is a Python script, ‘VendingMachine.py’, that will be turned into a service. Then, in HDC, a variety of rules and alerts can be set up to handle the incoming values. For example, if inventory runs out, a rule can trigger more inventory to be sent out to the machine.

The Arduino 101’s sensors will supply the data to upload. To interface with the board through the USB port, add the line below to tie into MRAA and Firmata*. Firmata allows the board to talk to the gateway, and MRAA handles the I/O pin communication. Note that root access is required to access the USB port by default, so when running the Python script locally, run it as ‘sudo python VendingMachine.py’.

# Interface with Arduino 101 board
mraa.addSubplatform(mraa.GENERIC_FIRMATA, "/dev/ttyACM0")

Code 2: line to have MRAA use Firmata

Using Firmata will shift all the pin numbers by 512, so pin A3 for the temperature sensor is really pin 512 + 3. 
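The offset can be wrapped in a tiny helper (a sketch for illustration; FIRMATA_OFFSET and firmata_pin are names of our choosing, not part of MRAA):

```python
# Firmata sub-platform pins are offset by 512 in MRAA.
FIRMATA_OFFSET = 512

def firmata_pin(pin):
    """Map an Arduino 101 pin number to its MRAA sub-platform index."""
    return FIRMATA_OFFSET + pin

# A3 (analog) and D7 (digital) both use the same offset:
print(firmata_pin(3))   # 515
print(firmata_pin(7))   # 519
```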

Arduino 101 pins:

Temperature sensor: A3

Touch sensor: D3

Motion sensor: D7

Blue motion indicator LED: D2

Red out of stock indicator LED: D5

Green purchase indicator LED: D6

temperature_sensor = mraa.Aio(512 + 3)
touch_sensor = mraa.Gpio(512 + 3)
touch_sensor.dir(mraa.DIR_IN)
motion_sensor = mraa.Gpio(512 + 7)
motion_sensor.dir(mraa.DIR_IN)
blue_motion_led = mraa.Gpio(512 + 2)
blue_motion_led.dir(mraa.DIR_OUT)
red_stock_led = mraa.Gpio(512 + 5)
red_stock_led.dir(mraa.DIR_OUT)
green_stock_led = mraa.Gpio(512 + 6)
green_stock_led.dir(mraa.DIR_OUT)

Code 3: Arduino 101 sensor initialization code

The program’s loop will compile the sensor data, handle items being purchased, and then send that data to HDC every minute.

count = 0
while ( running ):
    #motion sensor
    current_motion = motion_sensor.read()
    if (current_motion):
        print "Detecting moving object"
        blue_motion_led.write(1)
        motion += 1
    else:
        blue_motion_led.write(0)

    #temperature sensor
    fahrenheit = 0
    raw_Temp = temperature_sensor.read()
    if raw_Temp > 0 :
        resistance = (1023-raw_Temp)*10000.0/raw_Temp
        celsius = 1/(math.log(resistance/10000.0)/B+1/298.15)-273.15
        fahrenheit = (1.8 * celsius) + 32
    if fahrenheit > temperature:
        temperature = fahrenheit
    #purchase flow
    green_stock_led.write(0)
    customer_purchase = touch_sensor.read()
    if (num_chocobars > 0):
        red_stock_led.write(0)
        if (customer_purchase):
            print "Customer purchasing item"
            green_stock_led.write(1)
            num_chocobars -= 1
    else:
        red_stock_led.write(1)

    #send telemetry every POLL_INTERVAL_SEC (60) seconds
    if (count % POLL_INTERVAL_SEC == 0):
        send_telemetry_sample()
    count += 1
    sleep(1)

Code 4: The main loop of the program
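The temperature math in the loop above is the simplified B-parameter form of the Steinhart–Hart thermistor equation, using B = 3975, a 10 kΩ reference resistance, and 298.15 K (25 °C) as the reference point. Pulled out as a standalone sketch (raw_to_fahrenheit is an illustrative name, not part of the actual program):

```python
import math

B = 3975          # thermistor B constant from the sample code
R0 = 10000.0      # reference resistance at 25 C, in ohms
T0 = 298.15       # reference temperature in kelvin (25 C)

def raw_to_fahrenheit(raw):
    """Convert a 10-bit ADC reading from the Grove thermistor to Fahrenheit."""
    if raw <= 0:
        return 0.0
    resistance = (1023 - raw) * R0 / raw
    celsius = 1 / (math.log(resistance / R0) / B + 1 / T0) - 273.15
    return 1.8 * celsius + 32

# A mid-scale reading corresponds to roughly 25 C / 77 F:
print(round(raw_to_fahrenheit(512), 1))
```

A mid-scale ADC reading mapping to roughly room temperature is a quick sanity check when wiring up the sensor.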

To send telemetry to HDC, there are three required steps in the code for each metric: local memory needs to be allocated for it, the metric must be registered with the HDC agent, and the data needs to be sent. Refer to the condensed code below. In the actual program, the code will allocate and initialize all the sensors in the initialize() method and submit the telemetry data in the send_telemetry_sample() method. Refer to the end of the article for the full code. Following HDC’s recommendations for sending telemetry, data is only sent once every minute and only if the value has changed. This will also prevent alerts from being triggered multiple times unnecessarily.

telemetry_motion = None
telemetry_motion = iot_telemetry_allocate( iot_lib_hdl, "motion", IOT_TYPE_INT64 )

iot_telemetry_register( telemetry_motion, None, 0 )

iot_telemetry_publish( telemetry_motion, None, 0, IOT_TYPE_INT64, motion )

Code 5: code needed for each telemetry metric
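The send-on-change policy itself can be sketched independently of the iot_python API (publish_if_changed and the list-recording publisher below are hypothetical stand-ins for iot_telemetry_publish):

```python
# Sketch of the send-on-change policy used by send_telemetry_sample().
# 'publish' stands in for iot_telemetry_publish; any callable works.

def publish_if_changed(name, value, previous, publish):
    """Publish a metric only when it differs from the last value sent.

    'previous' is a dict remembering the last published value per metric.
    """
    if value != previous.get(name):
        publish(name, value)
        previous[name] = value
    return previous[name]

# Example: only two of three samples for 'temp' actually get published.
sent = []
previous = {}
for sample in (77.1, 77.1, 78.3):
    publish_if_changed("temp", sample, previous,
                       lambda n, v: sent.append((n, v)))
print(sent)   # [('temp', 77.1), ('temp', 78.3)]
```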

Registered telemetry items can be seen in the device’s dashboard in the Helix Device Cloud and can be viewed in graph form by expanding each metric.

Figure 2: Helix Device Cloud’s device dashboard

Expanding each telemetry item, the data can be viewed in graph form.

Figure 3: Temperature data graph in Helix Device Cloud

Rules and Alerts

Now that the data is being sent to the cloud, the rules and alerts feature can be leveraged. These help monitor incoming vending machine data for conditions that require attention.

The vending machine needs to send out an alert if the temperature gets too high, as the chocolate inside might melt. To create a new rule, go to the ‘Rules’ tab and click ‘CREATE NEW RULE’.

Figure 4: Create a new rule

1) Name the rule and select the device or devices to deploy the rule on. To deploy to a large group of devices at once, say all the vending machine gateways, use the more generic device variables on the left.

Figure 5: select devices for the rule

2) From there select the ‘temp’ telemetry item. Note that the telemetry gathering program must be running at the time of rule creation, otherwise the telemetry choices will be blank.

Figure 6: select telemetry metric to monitor

3) Once selected, set the condition to greater than or equal to 90, as chocolate melts at around 90 degrees Fahrenheit.

Figure 7: set conditions for the telemetry metric

4) Then set up the rule response; in this case, it will create a priority-one alert that the chocolate is melting.

Figure 8: set up an alert

Now when the temperature gets to 90 degrees, an alert will be created in the ‘Alerts’ tab.

Figure 9: alerts in Helix Device Cloud

The rule response also gives the option of sending an email or forwarding the data to a specified MQTT topic. Additionally, it can trigger a device action, which is used in the next example alert for low inventory.

While the other rule responses can be completely managed in HDC, a device action requires additional code on the device side. The gateway application code initiates and receives the action sent from HDC. The action in this case, ‘action_restock’, takes a simple integer parameter; however, HDC can also handle triggering a script or other system command.

1) To begin, allocate and register the action_restock:

#  Allocate action
restock_cmd = iot_action_allocate( iot_lib_hdl, "action_restock" )

 # Restock action
iot_action_parameter_add( restock_cmd,
PARAM_STOCK_NAME, IOT_PARAMETER_IN, IOT_TYPE_INT32, 0 )

Code 6: code for an HDC action

2) Next, add a callback defining what to do when the action sent from HDC is received on the gateway. Here it mimics restocking the chocolate bars: the sent number is added to the current stock value.

def on_action_restock( request ):
    '''Callback function for testing parameters'''
    result = IOT_STATUS_SUCCESS
    status = IOT_STATUS_FAILURE
    global num_chocobars
    chocobarShipment = 0

    IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "on_action_restock invoked\n\n")

    # int
    if ( result == IOT_STATUS_SUCCESS ):
        ( status, chocobarShipment ) = iot_action_parameter_get( request,
                PARAM_STOCK_NAME, False, IOT_TYPE_INT32 )
        if ( status != IOT_STATUS_SUCCESS ):
            result = IOT_STATUS_BAD_PARAMETER
            IOT_LOG( iot_lib_hdl, IOT_LOG_ERROR,
                    "get param failed for {}\n".format( PARAM_STOCK_NAME ) )
        else:
            IOT_LOG( iot_lib_hdl, IOT_LOG_INFO,
                    "{} success, value = {}\n".format(
                    PARAM_STOCK_NAME, chocobarShipment ) )
            num_chocobars = num_chocobars + chocobarShipment

    return result

Code 7: code to handle action received from HDC

3) Next, the action needs to be configured in HDC as well. Follow the above telemetry steps but choose ‘Add a Device Action’ instead. And like telemetry, actions are only available when the program is running on the device.

Figure 10: action conditions and responses

Now the gateway has two active rules applied to it: Restock Machine and ChocolateMeltingPoint.

Figure 11: Rules in HDC

Deployment

With the code complete, it can be turned into a service that runs continuously on the gateway and deployed through the Helix Device Cloud to all the vending machine gateways. To start, create a new package under the ‘Updates’ tab.

Figure 12: Create an update package

1) Name the package, enter the version number, and select the device compatibility criteria to narrow down the list of devices to deploy to. The files for the program need to be uploaded and attached to the package as well: VendingMachine.py and HDC_VendingMachine.service.

The HDC_VendingMachine.service file should look like the code below. The service should start after iot.service, which is the HDC agent on the gateway, and it will start the Python code. Note that the Python file will need to be moved out of the initial download location, as that location is erased by any subsequent package deployments. In addition, the first line of the VendingMachine.py file needs to be ‘#!/usr/bin/python’ for the service to be able to ExecStart it.

[Unit]
Description=HDC POC
After=iot.service
 
[Service]
ExecStart=/usr/bin/VendingMachineApp/VendingMachine.py
Restart=always
User=root
TimeoutStartSec=240
TimeoutStopSec=15
KillMode=process
KillSignal=SIGINT
 
[Install]
WantedBy=multi-user.target

Code 8: HDC_Vendingmachine.service file

Figure 13: Parameters of the update package

The cloud package can also execute commands at various parts of the install.

2) For pre-install, the directory to store the code needs to be created.

sudo mkdir /usr/bin/VendingMachineApp

Code 9: Pre-install command in HDC

3) During the install, make the Python file executable and move all the files to their final destinations. Note that HDC runs commands as user ‘iot’; however, the Python script needs to run as root to have access to USB. The HDC_VendingMachine.service file already sets the user to root. To avoid permission conflicts, the chmod must be done while the file is still owned by user iot; after the sudo cp, the owner changes to root.

chmod +x /var/lib/iot/update/download/VendingMachine.py
sudo cp /var/lib/iot/update/download/VendingMachine.py /usr/bin/VendingMachineApp/
sudo cp /var/lib/iot/update/download/HDC_VendingMachine.service /lib/systemd/system/

Code 10: Install commands in HDC

4) Post-install commands will enable and start the service.

sudo systemctl enable HDC_VendingMachine.service
sudo systemctl start HDC_VendingMachine.service

Code 11: Post-install commands in HDC

Figure 14: Install commands in HDC

5) Save the package and wait for it to finish. Then it is ready to be deployed by clicking on ‘Deploy’.

Figure 15: Saved update package in HDC

6) The Compatible Device list is pre-populated based on the device conditions specified in the package. Select and add the desired devices for the deployment, then click ‘Deploy’.

Figure 16: Select devices to deploy package to

7) The status will show as ‘In Progress’ and then change to ‘Completed’.

Figure 17: Completed deployment

8) On the gateway, check the status of the service with the command below. Refer to the syslog in ‘/var/log/syslog’ for any errors starting the service, and to ‘/var/lib/iot/update/iot_install_updates.log’ for errors with the install itself.

systemctl status HDC_VendingMachine

Code 12: Check HDC_VendingMachine service status

Full Code

#!/usr/bin/python

import os
import sys
import signal
import inspect
import math
import mraa

from time import sleep
sys.path.append( "../lib" )
from iot_python import *

B=3975

# Interface with Arduino 101 board
mraa.addSubplatform(mraa.GENERIC_FIRMATA, "/dev/ttyACM0")

temperature_sensor = mraa.Aio(512 + 3)
touch_sensor = mraa.Gpio(512 + 3)
touch_sensor.dir(mraa.DIR_IN)
motion_sensor = mraa.Gpio(512 + 7)
motion_sensor.dir(mraa.DIR_IN)
blue_motion_led = mraa.Gpio(512 + 2)
blue_motion_led.dir(mraa.DIR_OUT)
red_stock_led = mraa.Gpio(512 + 5)
red_stock_led.dir(mraa.DIR_OUT)
green_stock_led = mraa.Gpio(512 + 6)
green_stock_led.dir(mraa.DIR_OUT)

POLL_INTERVAL_SEC = 60
MAX_LOOP_ITERATIONS = 360
TAG_MAX_LEN = 128

# Set up named parameters for a sample action to validate actions with
# parameters
PARAM_STOCK_NAME = "# Chocobars to Ship"

#  telemetry data
telemetry_motion = None
telemetry_temp = None
telemetry_stock_chocobars = None

previous_numchocobars= 0
previous_motion = 1000
previous_temperature= 0

running = True
iot_lib_hdl = None
restock_cmd = None

def debug_log( log_level, source, msg ):
    '''Debug log wrapper for printing, used for callbacks'''
    i = 0
    prefix = ["FATAL","ALERT","CRITICAL","ERROR","WARNING",
            "NOTICE","INFO","DEBUG","TRACE"]
    # ensure log level is a valid enumeration value
    if ( log_level <= IOT_LOG_TRACE ):
        i = log_level
    print( "{}: {}".format( prefix[i], msg ) )


def IOT_LOG( handle, level, msg ):
    '''Logging function with support for call location'''
    # previous function call
    callerframerecord = inspect.stack()[1]
    # caller frame record: 1 = filename, 2 = line number, 3 = function
    iot_log( handle, level, callerframerecord[1], callerframerecord[3],
            callerframerecord[2], msg )


def initialize():
    '''Connects to the agent and registers all actions and telemetry'''
    global telemetry_motion, telemetry_temp, telemetry_stock_chocobars
    global iot_lib_hdl
    global restock_cmd
    result = False
    status = IOT_STATUS_FAILURE

    iot_lib_hdl = iot_initialize( "complete-app-py", None, 0 )
    iot_log_callback_set( iot_lib_hdl, debug_log )
    status = iot_connect( iot_lib_hdl, 0 )
    if ( status == IOT_STATUS_SUCCESS ):
        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Connected" )

        # Allocate telemetry items
        telemetry_motion = iot_telemetry_allocate( iot_lib_hdl,
                "motion", IOT_TYPE_INT64 )
        telemetry_temp = iot_telemetry_allocate( iot_lib_hdl,
                "temp", IOT_TYPE_FLOAT64 )
        iot_telemetry_attribute_set( telemetry_temp,
                "udmp:units", IOT_TYPE_STRING, "Fahrenheit" )
        telemetry_stock_chocobars = iot_telemetry_allocate( iot_lib_hdl,
                "stock chocobars", IOT_TYPE_INT64 )

        # Register telemetry items
        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Registering telemetry: {}".format(
                "motion" ) )
        iot_telemetry_register( telemetry_motion, None, 0 )
        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Registering telemetry: {}".format(
                "temp" ) )
        iot_telemetry_register( telemetry_temp, None, 0 )
        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Registering telemetry: {}".format(
                "stock chocobars" ) )
        iot_telemetry_register( telemetry_stock_chocobars, None, 0 )
  

        #  Allocate action
        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO,
                "Registering action test_parameters\n" )
        restock_cmd = iot_action_allocate( iot_lib_hdl, "action_restock" )

        # Restock action
        iot_action_parameter_add( restock_cmd,
            PARAM_STOCK_NAME, IOT_PARAMETER_IN, IOT_TYPE_INT32, 0 )

        #validate action registration
        status = iot_action_register_callback(restock_cmd,
                on_action_restock, None, 0 )
        if ( status != IOT_STATUS_SUCCESS ):
            IOT_LOG( iot_lib_hdl, IOT_LOG_ERROR,
                    "Failed to register command. Reason: {}".format(
                    iot_error( status ) ) )
    else:
        IOT_LOG( iot_lib_hdl, IOT_LOG_ERROR, "Failed to connect" )
    if ( status == IOT_STATUS_SUCCESS ):
        result = True
    return result

def on_action_restock( request ):
    '''Callback function for testing parameters'''
    result = IOT_STATUS_SUCCESS
    status = IOT_STATUS_FAILURE
    global num_chocobars
    chocobarShipment = 0

    IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "on_action_restock invoked\n\n")

    # int
    if ( result == IOT_STATUS_SUCCESS ):
        ( status, chocobarShipment ) = iot_action_parameter_get( request,
                PARAM_STOCK_NAME, False, IOT_TYPE_INT32 )
        if ( status != IOT_STATUS_SUCCESS ):
            result = IOT_STATUS_BAD_PARAMETER
            IOT_LOG( iot_lib_hdl, IOT_LOG_ERROR,
                    "get param failed for {}\n".format( PARAM_STOCK_NAME ) )
        else:
            IOT_LOG( iot_lib_hdl, IOT_LOG_INFO,
                    "{} success, value = {}\n".format(
                    PARAM_STOCK_NAME, chocobarShipment ) )
            num_chocobars = num_chocobars + chocobarShipment

    return result


def send_telemetry_sample():
    '''Send telemetry data to the agent'''
    global num_chocobars, motion, temperature
    global previous_numchocobars, previous_motion, previous_temperature

    IOT_LOG( iot_lib_hdl, IOT_LOG_INFO,
        "{}\n".format("+--------------------------------------------------------+"))

    if previous_motion != motion:
        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Sending motion  : {}".format(motion) );
        iot_telemetry_publish( telemetry_motion, None, 0, IOT_TYPE_INT64, motion )
        previous_motion = motion
    motion = 0

    if previous_temperature != temperature:
        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Sending temp  : {}".format(temperature) );
        iot_telemetry_publish( telemetry_temp, None, 0, IOT_TYPE_FLOAT64, temperature )
        previous_temperature = temperature
    temperature = 0

    if previous_numchocobars != num_chocobars:
        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Sending chocobars stock : {}".format(num_chocobars) );
        iot_telemetry_publish( telemetry_stock_chocobars, None, 0, IOT_TYPE_INT64, num_chocobars )
        previous_numchocobars = num_chocobars
 

def sig_handler( signo, frame ):
    '''Handles termination signal and tears down gracefully'''
    global running
    if ( signo == signal.SIGINT ):
        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Received termination signal...\n" )
        running = False

if ( __name__ == '__main__' ):
    global motion, num_chocobars, temperature
    motion = 0
    num_chocobars = 2
    temperature = 0
    if ( initialize() == IOT_TRUE ):
        signal.signal( signal.SIGINT, sig_handler )

        IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Sending telemetry..." )

        count = 0
        while ( running ):
            #motion sensor
            current_motion = motion_sensor.read()
            if (current_motion):
                print "Detecting moving object"
                blue_motion_led.write(1)
                motion += 1
            else:
                blue_motion_led.write(0)
    
            #temperature sensor
            fahrenheit = 0
            raw_Temp = temperature_sensor.read()
            if raw_Temp> 0 :
                resistance = (1023-raw_Temp)*10000.0/raw_Temp
                celsius = 1/(math.log(resistance/10000.0)/B+1/298.15)-273.15
                fahrenheit = (1.8 * celsius) + 32
            if fahrenheit > temperature:
                temperature = fahrenheit 
            #purchase flow
            green_stock_led.write(0)
            customer_purchase = touch_sensor.read()
            if (num_chocobars > 0):
                red_stock_led.write(0)
                if (customer_purchase):
                    print "Customer purchasing item"
                    green_stock_led.write(1)
                    num_chocobars -= 1
            else:
                red_stock_led.write(1)

            #send telemetry every POLL_INTERVAL_SEC (60) seconds
            if (count%POLL_INTERVAL_SEC==0):
                send_telemetry_sample()
            count += 1
            sleep(1)

    #  Terminate
    IOT_LOG( iot_lib_hdl, IOT_LOG_INFO, "Exiting..." )
    iot_terminate( iot_lib_hdl, 0 )
    exit( 0 )

Code 13: VendingMachine.py file

Summary

Our vending machine code has now been successfully deployed using HDC. Temperature and stock data is being monitored with automated rules. Motion data can be referenced as time goes on to monitor foot traffic around the vending machine. Any future updates to the program and overall gateway health can be deployed and monitored using HDC.

To purchase HDC visit https://www.windriver.com/company/contact/index.html or email sales@windriver.com

References

https://www.windriver.com/products/helix/device-cloud/

http://knowledge.windriver.com/en-us/000_Products/040/050/020/000_Wind_River_Helix_Device_Cloud_Getting_Started/060

 

 

About the author

Whitney Foster is a software engineer at Intel in the Software Solutions Group working on scale enabling projects for Internet of Things.

 

Notices

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.

Intel, Intel RealSense, Intel Edison, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2017 Intel Corporation.
